Massimo Caccia

About Massimo Caccia

Massimo Caccia, with an exceptional h-index of 89 and a recent h-index of 36 (since 2020), is a distinguished researcher at Università degli Studi dell'Insubria who specializes in the field of Physics.

His recent articles reflect a diverse array of research interests and contributions to the field:

Characterisation of a Silicon Photomultiplier Based Oncological Brachytherapy Fibre Dosimeter

Mass-manufacturable scintillation-based optical fiber dosimeters for brachytherapy

Method and system for meaningful counterfactual explanations

WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks?

Design and commissioning of a Silicon Photomultiplier-based dosimeter for Low Dose Rate (LDR) oncological brachytherapy

ORIGIN, an EU project targeting real-time 3D dose imaging and source localization in brachytherapy: Commissioning and first results of a 16-sensor prototype

Test beam results of the fiber-sampling dual-readout calorimeter

Task-Agnostic Continual Reinforcement Learning: Gaining Insights and Overcoming Challenges

Massimo Caccia Information

University: Università degli Studi dell'Insubria

Position: ___

Citations (all): 82,344

Citations (since 2020): 20,159

Cited by: 65,722

h-index (all): 89

h-index (since 2020): 36

i10-index (all): 325

i10-index (since 2020): 93

Email:

University Profile Page: Università degli Studi dell'Insubria

Massimo Caccia Skills & Research Interests

Physics

Top articles of Massimo Caccia

Characterisation of a Silicon Photomultiplier Based Oncological Brachytherapy Fibre Dosimeter

Authors

Massimo Caccia,Agnese Giaz,Marco Galoppo,Romualdo Santoro,Michael Martyn,Carla Bianchi,Raffaele Novario,Peter Woulfe,Sinead O’Keeffe

Journal

Sensors

Published Date

2024/1/30

Source localisation and real-time dose verification are at the forefront of medical research in brachytherapy, an oncological radiotherapy procedure based on radioactive sources implanted in the patient body. The ORIGIN project aims to respond to this medical community’s need by targeting the development of a multi-point dose mapping system based on fibre sensors integrating a small volume of scintillating material into the tip and interfaced with silicon photomultipliers operated in counting mode. In this paper, a novel method for the selection of the optimal silicon photomultipliers to be used is presented, as well as a laboratory characterisation based on dosimetric figures of merit. More specifically, a technique exploiting the optical cross-talk to maintain the detector linearity in high-rate conditions is demonstrated. Lastly, it is shown that the ORIGIN system complies with the TG43-U1 protocol in high and low dose rate pre-clinical trials with actual brachytherapy sources, an essential requirement for assessing the proposed system as a dosimeter and comparing the performance of the system prototype against the ORIGIN project specifications.
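
As context for the high-rate linearity issue mentioned above, a counting detector with an effective dead time τ deviates from linearity as the true photon rate n grows. In the standard non-paralyzable dead-time model (a generic textbook relation, not the cross-talk-based technique introduced in the paper), the measured rate m is

$$ m = \frac{n}{1 + n\,\tau} \qquad\Longleftrightarrow\qquad n = \frac{m}{1 - m\,\tau}. $$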

Mass-manufacturable scintillation-based optical fiber dosimeters for brachytherapy

Authors

Agnieszka Gierej,Tigran Baghdasaryan,Michael Martyn,Peter Woulfe,Owen Mc Laughlin,Kevin Prise,Geraldine Workman,Sinead O'Keeffe,Kurt Rochlitz,Sergey Verlinski,Agnese Giaz,Romualdo Santoro,Massimo Caccia,Francis Berghmans,Jürgen Van Erps

Journal

Biosensors and Bioelectronics

Published Date

2024/7/1

Scintillation-based fiber dosimeters are a powerful tool for minimally invasive localized real-time monitoring of the dose rate during Low Dose Rate (LDR) and High Dose Rate (HDR) brachytherapy (BT). This paper presents the design, fabrication, and characterization of such dosimeters, consisting of scintillating sensor tips attached to polymer optical fiber (POF). The sensor tips consist of inorganic scintillators, i.e. Gd2O2S:Tb for LDR-BT, and Y2O3:Eu+4YVO4:Eu for HDR-BT, dispersed in a polymer host. The shape and size of the tips are optimized using non-sequential ray tracing simulations towards maximizing the collection and coupling of the scintillation signal into the POF. They are then manufactured by means of a custom moulding process implemented on a commercial hot embossing machine, paving the way towards series production. Dosimetry experiments in water phantoms show that both the HDR-BT …

Method and system for meaningful counterfactual explanations

Published Date

2024/4/16

A computer-implemented method for explaining an image classifier, the method comprising: receiving an initial image, the initial image having been wrongly classified by the image classifier; receiving an initial gradient of a function executed by the image classifier generated while classifying the initial image, the function being indicative of a probability for the initial image to belong to an initial class; converting the initial image into a latent vector, the latent vector being a representation of the initial image in a latent space; generating a plurality of perturbation vectors using the initial gradient of the function executed by the image classifier; combining the latent vector with each one of the plurality of perturbation vectors, thereby obtaining a plurality of modified vectors; for each one of the plurality of modified vectors, reconstructing a respective image, thereby obtaining a plurality of reconstructed images; transmitting the …
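
The claim above walks through a concrete pipeline: encode the misclassified image, perturb its latent vector with gradient-derived vectors, and decode the candidates. The sketch below only mirrors that flow; the toy encoder, decoder and classifier, the image size, and the perturbation scales are all illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of the latent-space counterfactual recipe described above.
# The encoder/decoder/classifier are toy stand-ins (assumptions), not the patented system.
import torch
import torch.nn as nn

torch.manual_seed(0)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32))      # image -> latent vector
decoder = nn.Sequential(nn.Linear(32, 28 * 28), nn.Unflatten(1, (1, 28, 28)))
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy image classifier

image = torch.rand(1, 1, 28, 28, requires_grad=True)               # wrongly classified input
initial_class = 3

# Gradient of the class probability w.r.t. the input, as produced while classifying the image.
prob = classifier(image).softmax(dim=1)[0, initial_class]
prob.backward()
grad = image.grad.detach()

with torch.no_grad():
    latent = encoder(image)                                         # latent representation
    # Build several perturbation vectors from the gradient (scaled copies, as an assumption).
    perturbations = [encoder(eps * grad.sign()) for eps in (0.05, 0.1, 0.2)]
    modified = [latent + p for p in perturbations]                  # combine with the latent vector
    reconstructions = [decoder(m) for m in modified]                # candidate counterfactual images

print(len(reconstructions), reconstructions[0].shape)
```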

WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks?

Authors

Alexandre Drouin,Maxime Gasse,Massimo Caccia,Issam H Laradji,Manuel Del Verme,Tom Marty,Léo Boisvert,Megh Thakkar,Quentin Cappart,David Vazquez,Nicolas Chapados,Alexandre Lacoste

Journal

arXiv preprint arXiv:2403.07718

Published Date

2024/3/12

We study the use of large language model-based agents for interacting with software via web browsers. Unlike prior work, we focus on measuring the agents' ability to perform tasks that span the typical daily work of knowledge workers utilizing enterprise software systems. To this end, we propose WorkArena, a remote-hosted benchmark of 29 tasks based on the widely-used ServiceNow platform. We also introduce BrowserGym, an environment for the design and evaluation of such agents, offering a rich set of actions as well as multimodal observations. Our empirical evaluation reveals that while current agents show promise on WorkArena, there remains a considerable gap towards achieving full task automation. Notably, our analysis uncovers a significant performance disparity between open and closed-source LLMs, highlighting a critical area for future exploration and development in the field.
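
To illustrate how such an agent is driven in a benchmark of this kind, here is a generic observation-action loop; FakeWebEnv, llm_choose_action and the observation fields are placeholders for illustration and do not reflect the actual WorkArena or BrowserGym API.

```python
# Generic agent-environment loop in the style of a web-agent benchmark (illustrative only).
from typing import Any, Dict

def llm_choose_action(observation: Dict[str, Any]) -> str:
    """Stand-in for an LLM call mapping a (multimodal) observation to an action string."""
    return "click('submit-button')"  # placeholder action

class FakeWebEnv:
    """Toy environment standing in for a remote-hosted knowledge-work task."""
    def reset(self):
        return {"goal": "file an incident ticket", "html": "<html>...</html>"}, {}
    def step(self, action):
        obs = {"goal": "file an incident ticket", "html": "<html>done</html>"}
        return obs, 1.0, True, False, {}  # obs, reward, terminated, truncated, info

env = FakeWebEnv()
obs, info = env.reset()
done = False
while not done:
    action = llm_choose_action(obs)              # the agent reads the page and picks an action
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
print("episode reward:", reward)
```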

Design and commissioning of a Silicon Photomultiplier-based dosimeter for Low Dose Rate (LDR) oncological brachytherapy

Authors

A Burdyko,M Caccia,A Giaz,R Santoro,G Tomaciello

Journal

Journal of Instrumentation

Published Date

2024/3/12

Brachytherapy is a radiotherapy procedure performed with radioactive sources implanted into the patient's body, close to the area affected by cancer. This is a reference procedure for the treatment of prostate and gynecologic cancer due to the reduction of the dose released close to organs at risk (e.g., rectum, bladder, colon). For this reason, real-time dose verification and source localisation are essential for an optimal treatment plan. The ORIGIN collaboration aims to achieve this goal through a 16-fibre sensor system, designed to house a small volume of scintillating material in a transparent fibre tip to enable point-like measurements. The selected scintillating materials feature a decay time of about 500 μs and the signal associated with the primary γ-ray interaction results in the emission of a sequence of single photons distributed over time. Therefore, the dosimeter requires a detector with single-photon sensitivity …

ORIGIN, an EU project targeting real-time 3D dose imaging and source localization in brachytherapy: Commissioning and first results of a 16-sensor prototype

Authors

A Giaz,M Galoppo,N Ampilogov,S Cometti,Jennifer Hanly,O Houlihan,W Kam,M Martyn,O Mc Laughlin,R Santoro,G Workman,M Caccia,S O’Keeffe

Journal

Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment

Published Date

2023/3/1

The ORIGIN project targets the production and qualification of a real-time radiation dose imaging and source localization system for both Low Dose Rate (LDR) and High Dose Rate (HDR) brachytherapy treatments, namely radiotherapy based on the use of radioactive sources implanted in the patient’s body. This goal will be achieved through a 16-fiber sensor system, engineered to house in a clear-fiber tip a small volume of the scintillator to allow point-like measurements of the delivered dose. Each fiber is optically coupled to a sensor with single photon sensitivity (Silicon Photomultipliers — SiPMs) operating in counting mode. The readout is based on the CITIROC1A ASIC by WEEROC, embedded in the FERS-DT5202 scalable platform designed by CAEN S.p.A. Linearity and sensitivity together with the fiber response uniformity, system stability, and measurement reproducibility are key features for an instrument …

Test beam results of the fiber-sampling dual-readout calorimeter

Authors

A Giaz,R Santoro,N Ampilogov,M Caccia,V Chmill,S Cometti,R Ferrari,G Gaudio,P Giacomelli,A Karadzhinova Ferrer,A Loeschcke Centeno,L Pezzotti,G Polesello,E Proserpio,I Vivarelli

Journal

Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment

Published Date

2023/3/1

The dual-readout calorimetric technique reconstructs the event-by-event electromagnetic fraction of the hadronic shower through the simultaneous measurement of scintillating (S) and Cherenkov (C) light produced by the shower development. The new generation of prototypes, based on Silicon Photomultipliers (SiPMs) readout, adds unprecedented granularity to the well-known high-energy resolution. A highly granular prototype (10 × 10 × 100 cm³), designed to fully contain electromagnetic showers, was recently built and qualified on beam. It consists of 9 modules, each made of 320 brass capillaries equipped with both scintillating and clear fibers. All the fibers of the central module are coupled with SiPMs, while the PMTs are used for the others. Furthermore, the new FERS-System, designed by Caen to exploit the CITIROC1A ASICs performances, is at the core of the SiPM readout. The recent test beam at DESY …
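
For reference, the standard dual-readout relations (in their common textbook form, not quoted from the paper) link the scintillation signal S and the Cherenkov signal C to the shower energy E and its electromagnetic fraction f_em, and allow the energy to be recovered event by event:

$$ S = E\,[\,f_{\mathrm{em}} + (h/e)_S\,(1 - f_{\mathrm{em}})\,], \qquad C = E\,[\,f_{\mathrm{em}} + (h/e)_C\,(1 - f_{\mathrm{em}})\,], $$

$$ E = \frac{S - \chi C}{1 - \chi}, \qquad \chi = \frac{1 - (h/e)_S}{1 - (h/e)_C}, $$

where (h/e)_S and (h/e)_C are the hadron-to-electron response ratios of the two channels.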

Task-Agnostic Continual Reinforcement Learning: Gaining Insights and Overcoming Challenges

Authors

Massimo Caccia,Jonas Mueller,Taesup Kim,Laurent Charlin,Rasool Fakoor

Published Date

2022

Continual learning (CL) enables the development of models and agents that learn from a sequence of tasks while addressing the limitations of standard deep learning approaches, such as catastrophic forgetting. In this work, we investigate the factors that contribute to the performance differences between task-agnostic CL and multi-task (MTL) agents. We pose two hypotheses:(1) task-agnostic methods might provide advantages in settings with limited data, computation, or high dimensionality, and (2) faster adaptation may be particularly beneficial in continual learning settings, helping to mitigate the effects of catastrophic forgetting. To investigate these hypotheses, we introduce a replay-based recurrent reinforcement learning (3RL) methodology for task-agnostic CL agents. We assess 3RL on a synthetic task and the Meta-World benchmark, which includes 50 unique manipulation tasks. Our results demonstrate that 3RL outperforms baseline methods and can even surpass its multi-task equivalent in challenging settings with high dimensionality. We also show that the recurrent task-agnostic agent consistently outperforms or matches the performance of its transformer-based counterpart. These findings provide insights into the advantages of task-agnostic CL over task-aware MTL approaches and highlight the potential of task-agnostic methods in resource-constrained, high-dimensional, and multi-task environments.

Flexible Model Aggregation for Quantile Regression.

Authors

Rasool Fakoor,Taesup Kim,Jonas Mueller,Alex Smola,Ryan J Tibshirani

Journal

Journal of Machine Learning Research

Published Date

2023

Quantile regression is a fundamental problem in statistical learning motivated by a need to quantify uncertainty in predictions, or to model a diverse population without being overly reductive. For instance, epidemiological forecasts, cost estimates, and revenue predictions all benefit from being able to quantify the range of possible values accurately. As such, many models have been developed for this problem over many years of research in statistics, machine learning, and related fields. Rather than proposing yet another (new) algorithm for quantile regression we adopt a meta viewpoint: we investigate methods for aggregating any number of conditional quantile models, in order to improve accuracy and robustness. We consider weighted ensembles where weights may vary over not only individual models, but also over quantile levels, and feature values. All of the models we consider in this paper can be fit using modern deep learning toolkits, and hence are widely accessible (from an implementation point of view) and scalable. To improve the accuracy of the predicted quantiles (or equivalently, prediction intervals), we develop tools for ensuring that quantiles remain monotonically ordered, and apply conformal calibration methods. These can be used without any modification of the original library of base models. We also review some basic theory surrounding quantile aggregation and related scoring rules, and contribute a few new results to this literature (for example, the fact that post sorting or post isotonic regression can only improve the weighted interval score). Finally, we provide an extensive suite of empirical comparisons across 34 data …
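
A minimal sketch of the aggregation idea described above: a weighted ensemble of per-quantile predictions followed by post-sorting to keep the quantiles monotonically ordered. The data, the number of models and the uniform weights are illustrative assumptions, not the paper's experimental setup.

```python
# Weighted aggregation of quantile models plus post-sorting (illustrative data and weights).
import numpy as np

quantile_levels = np.array([0.1, 0.5, 0.9])

# Predictions from 3 base models for 4 examples at each quantile level:
# shape (n_models, n_examples, n_quantiles).
preds = np.random.default_rng(0).normal(size=(3, 4, 3)).cumsum(axis=2)

# Weights may vary over models and quantile levels; here they are uniform.
weights = np.full((3, 1, 3), 1.0 / 3.0)

ensemble = (weights * preds).sum(axis=0)        # weighted aggregation, shape (n_examples, n_quantiles)
ensemble_sorted = np.sort(ensemble, axis=1)     # post-sorting: enforce monotone quantiles per example

print(quantile_levels)
print(ensemble_sorted)
```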

The Unsolved Challenges of LLMs as Generalist Web Agents: A Case Study

Authors

Rim Assouel,Tom Marty,Massimo Caccia,Issam H Laradji,Alexandre Drouin,Sai Rajeswar,Hector Palacios,Quentin Cappart,David Vazquez,Nicolas Chapados,Maxime Gasse,Alexandre Lacoste

Published Date

2023/11/8

In this work, we investigate the challenges associated with developing goal-driven AI agents capable of performing novel tasks in a web environment using zero-shot learning. Our primary focus is on harnessing the capabilities of large language models (LLMs) as generalist web agents interacting with HTML-based user interfaces (UIs). We evaluate the MiniWoB benchmark and show that it is a suitable yet challenging platform for assessing an agent's ability to comprehend and solve tasks without prior human demonstrations. Our main contribution encompasses a set of extensive experiments where we compare and contrast various agent design considerations, such as action space, observation space, and the choice of LLM, with the aim of shedding light on the bottlenecks and limitations of LLM-based zero-shot learning in this domain, in order to foster research endeavours in this area. In our empirical analysis, we find that: (1) the effectiveness of the different action spaces are notably dependent on the specific LLM used; (2) open-source LLMs hold their own as competitive generalist web agents when compared to their proprietary counterparts; and (3) using an accessibility-based representation for web pages, despite resulting in some performance loss, emerges as a cost-effective strategy, particularly as web page sizes increase.

Acknowledgment to the Reviewers of Instruments in 2022

Authors

Todd Adams,Alexander Deisting,Wael A Altabey,Nicolas Delerue,Maurizio Angelone,Daniele Dell’Aquila,Francesco Arneodo,Franz Demmel,Marina Artuso,Varga Dezső,Zara Bagdasarian,José Díaz,Woosuk Bang,Sergio JC Do Carmo,Filippo Baruffaldi,Shuo Dong,Alexander Bazilevsky,Francois Drielsma,Jose Bazo,John Matthew Durham,Gabriele Bigongiari,Sarah Eno,Istvan Bikit,Federico Ferri,Kevin Black,Ivor Fleck,Vincent Boudry,Cervelli Franco,James Brau,Ariel Friedman,Marco Bregant,Alessio Galatà,Claudio Bruschini,Claudio Gatti,Massimo Caccia,Gabriella Gaudio,Cristina Carloganu,Elias Gerstmayr,Mateus F Carneiro,Veta Ghenescu,Paolo W Cattaneo,Sowjanya Gollapinni,Francesca Cavallari,Sergio Gonzalez-Sevilla,Susana Cebrián Guajardo,Matthew Gott,Doo-Hee Chang,Gerald Grenier,Claude Chaudet,Stefan Gundacker,Leonardo Chiatti,Richard J Hill,Paul Colas,Robert James Hirosky,Eduardo Cortina Gil,David Hitlin,Mogens Dam,Adrián Irles,Jordan Damgov,Tetsuya Ishikawa,Sridhara Dasu,Roberto Iuppa,Giuseppe Dattoli,Dimka I Ivanova,Gintautas Daunys,David Joffe,Gabor David,Dejan Joković,Tomas Davidek,Thomas W Jones,Cosmin Deaconu,Xiangyang Ju

Published Date

2023

High-quality academic publishing is built on rigorous peer review.[...] of whether the articles they examined were ultimately published, the editors would like to express their appreciation and thank the following reviewers for the time and dedication that they have shown Instruments: Adams, Todd Lee, Lawrence Altabey, Wael A. Li, Shengchao Angelone, Maurizio Lobanov, Artur Arneodo, Francesco Lux, Thorsten Artuso, Marina Maggiore, Mario Bagdasarian, Zara Mandaglio, Giuseppe Bang, Woosuk Mans, Jeremiah Baruffaldi, Filippo Mantovani, Maddalena Bazilevsky, Alexander Maravin, Yurii Bazo, Jose Marcatili, Sara Bigongiari, Gabriele Martin, Philip Bikit, Istvan Menke, Sven Black, Kevin Meschi, Emilio Boudry, Vincent Miramonti, Lino Brau, James Mitsou, Vasiliki Bregant, Marco Mohamed, Abdelrhman Bruschini, Claudio Morange, Nicolas Caccia, Massimo Moskalensky, Alexander E. Carloganu, Cristina …

A Method for Implementing a SHA256 Hardware Accelerator Inside an Quantum True Random Number Generator (QTRNG)

Authors

Kamil Witek,Massimo Caccia,Wojciech Kucewicz,Mateusz Baszczyk,Piotr Dorosz,Łukasz Mik

Published Date

2023/6/29

Availability of streams of random numbers is critical in a number of significant applications. e.g. computer security and cryptography, privacy preservation procedures, secure communication, numerical simulation of complex phenomena, gaming and gambling. Hardware generation of random numbers, especially when based on quantum phenomena, is made unbreakable by the same laws of nature. Random Power [1] [2] is focusing on the development of a Quantum-True Random Number generation (QTRNG) platform, producing unpredictable bit streams analyzing the time series of self-amplified endogenous pulses due to stochastically generated charge carriers in a dedicated silicon device. As per NIST recommendations [3], a SHA256 conditioning function was used in order to reduce potential biases and guarantee the required level of entropy. This paper reports the firmware development and …
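
As a software illustration of the conditioning step described above, the sketch below hashes a raw entropy block with SHA-256, which is how a conditioning function reduces bias; the block size and the os.urandom stand-in for the silicon device are assumptions, and the paper's actual implementation is a hardware accelerator inside the QTRNG.

```python
# Software model of SHA-256 conditioning of a raw entropy block (illustrative only).
import hashlib
import os

def condition(raw_block: bytes) -> bytes:
    """Reduce bias in a raw entropy block by hashing it with SHA-256."""
    return hashlib.sha256(raw_block).digest()

raw = os.urandom(64)          # stand-in for a 512-bit raw block from the silicon device
conditioned = condition(raw)  # 256 bits of conditioned output
print(conditioned.hex())
```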

Nevis' 22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research

Authors

Jorg Bornschein,Alexandre Galashov,Ross Hemsley,Amal Rannen-Triki,Yutian Chen,Arslan Chaudhry,Xu Owen He,Arthur Douillard,Massimo Caccia,Qixuan Feng,Jiajun Shen,Sylvestre-Alvise Rebuffi,Kitty Stacpoole,Diego de Las Casas,Will Hawkins,Angeliki Lazaridou,Yee Whye Teh,Andrei A Rusu,Razvan Pascanu,Marc’Aurelio Ranzato

Journal

Journal of Machine Learning Research

Published Date

2023

A shared goal of several machine learning communities like continual learning, meta-learning and transfer learning, is to design algorithms and models that efficiently and robustly adapt to unseen tasks. An even more ambitious goal is to build models that never stop adapting, and that become increasingly more efficient through time by suitably transferring the accrued knowledge. Beyond the study of the actual learning algorithm and model architecture, there are several hurdles towards our quest to build such models, such as the choice of learning protocol, metric of success and data needed to validate research hypotheses. In this work, we introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks, sorted chronologically and extracted from papers sampled uniformly from computer vision proceedings spanning the last three decades. The resulting stream reflects what the research community thought was meaningful at any point in time, and it serves as an ideal test bed to assess how well models can adapt to new tasks, and do so better and more efficiently as time goes by. Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth. The diversity is also reflected in the wide range of dataset sizes, spanning over four orders of magnitude. Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks, yet with a low entry barrier as it is limited to a single modality and well understood supervised learning …

Towards compute-optimal transfer learning

Authors

Massimo Caccia,Alexandre Galashov,Arthur Douillard,Amal Rannen-Triki,Dushyant Rao,Michela Paganini,Laurent Charlin,Marc'Aurelio Ranzato,Razvan Pascanu

Journal

arXiv preprint arXiv:2304.13164

Published Date

2023/4/25

The field of transfer learning is undergoing a significant shift with the introduction of large pretrained models which have demonstrated strong adaptability to a variety of downstream tasks. However, the high computational and memory requirements to finetune or use these models can be a hindrance to their widespread use. In this study, we present a solution to this issue by proposing a simple yet effective way to trade computational efficiency for asymptotic performance which we define as the performance a learning algorithm achieves as compute tends to infinity. Specifically, we argue that zero-shot structured pruning of pretrained models allows them to increase compute efficiency with minimal reduction in performance. We evaluate our method on the Nevis'22 continual learning benchmark that offers a diverse set of transfer scenarios. Our results show that pruning convolutional filters of pretrained models can lead to more than 20% performance improvement in low computational regimes.
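
A minimal sketch of zero-shot structured pruning of convolutional filters in the spirit of the approach above; the toy model, the 50% ratio and the L1 criterion are assumptions, and zeroing filters this way does not by itself shrink the tensors (further surgery is needed to realize the compute savings).

```python
# Structured pruning of conv filters by L1 norm, with no retraining (illustrative only).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
)

# Zero out 50% of the filters in each conv layer, ranked by smallest L1 norm.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.ln_structured(module, name="weight", amount=0.5, n=1, dim=0)
        prune.remove(module, "weight")  # make the pruning permanent

x = torch.randn(1, 3, 32, 32)
print(model(x).shape)
```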

SISSA: Pulsed Production of Antihydrogen in AEgIS

Authors

N Zurlo,H Sandaker,V Petracek,B Rienäcker,RS Brusa,R Ferragut,C Zimmer,R Santoro,S Gerber,G Testera,M Giammarchi,R Caravita,F Prelz,G Bonomi,G Consolati,E Oswald,OM Røhne,V Lagomarsino,G Nebbia,A Belov,L Nowak,O Khalidova,M Caccia,F Castelli,S Mariazzi,F Guatieri,P Cheinet,M Fanì,M Oberthaler,A Camper,P Yzombard,A Demetrio,D Pagano,R Müller,V Matveev,C Malbrunot,D Comparat,M Prevedelli,T Wolz,P Nedelec,A Gligorova,M Antonello,A Rotondi,A Hinterberger,D Krasnický,C Amsler,A Kellerbauer,L Penasa,L Di Noto,L Povolo,LT Glöggler,M Doser,V Toso,S Haider,J Fesel,IC Tietje

Published Date

2022

Cold antihydrogen atoms are a powerful tool to probe the validity of fundamental physics laws, and it is clear that colder atoms, generally speaking, allow an increased level of precision. After the first production of cold antihydrogen (H̄) in 2002, experimental efforts have progressed continuously (trapping, beam formation, spectroscopy), with competitive results already achieved by adapting to cold antiatoms techniques previously well developed for ordinary atoms. Unfortunately, the number of H̄ atoms that can be produced in dedicated experiments is many orders of magnitude smaller than the number of hydrogen atoms, which are at hand in large amounts, so the development of novel techniques that allow the production of H̄ under well-defined conditions (and possibly control of its formation time and energy levels) is essential to improve the sensitivity of the methods applied by the different experiments. We present here the first experimental results concerning the production of H̄ in a pulsed mode, where the time at which 90% of the atoms are produced is known with an uncertainty of around 250 ns. The pulsed source is generated by the charge-exchange reaction between Rydberg positronium atoms (Ps*) and trapped antiprotons (p̄), cooled and manipulated in an electromagnetic trap: Ps* + p̄ → H̄* + e⁻. The Rydberg positronium atoms, in turn, are produced through the implantation of a pulsed positron beam into a mesoporous silica target and are excited by two subsequent laser pulses, the first to the n = 3 level, the second to the needed Rydberg level. The pulsed production allows control of the antihydrogen temperature and facilitates the …

Luminosity determination in pp collisions at √s = 13 TeV using the ATLAS detector at the LHC

Authors

ATLAS Collaboration

Published Date

2022/12/20

A precise measurement of the integrated luminosity is a key component of the ATLAS physics programme at the CERN Large Hadron Collider (LHC), in particular for cross-section measurements where it is often one of the leading sources of uncertainty. Searches for new physics phenomena beyond those predicted by the Standard Model also often require accurate estimates of the luminosity to determine background levels and sensitivity. This paper describes the measurement of the luminosity of the proton–proton (pp) collision data sample delivered to the ATLAS detector at a centre-of-mass energy of √s = 13 TeV …

Characterization of scintillating materials in use for brachytherapy fiber based dosimeters

Authors

S Cometti,Agnieszka Gierej,A Giaz,S Lomazzi,Tigran Baghdasaryan,Jürgen Van Erps,Francis Berghmans,R Santoro,M Caccia,Sinead O’Keeffe

Journal

Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment

Published Date

2022/11/1

This paper reports the characterization of two scintillating materials in powder form, Gadox and YVO, embedded in a light-activated resin, used in a probe developed for oncological brachytherapy in-vivo dosimetry. The materials were characterized in terms of internal absorption, scintillation decay time, and light yield. The measurement of the optical characteristics highlighted a significant internal absorption at the scintillation light wavelength, with values of 6.5 dB/mm for Gadox and 14.1 dB/mm for YVO. Measurements of the characteristic scintillation time and of the light yield were performed with a novel method based on single photon counting, profiting from the long decay time of the materials under study. Measurements have been complemented by a two-step simulation with Geant4 to study the energy deposition followed by a ZEMAX OpticStudio® ray tracing to estimate the light collection efficiency. The decay …
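
As a quick worked example of what those attenuation figures imply (assuming, purely for illustration, a 1 mm optical path inside the sensor tip), the transmitted fraction T for an attenuation α in dB/mm over a length L is

$$ T = 10^{-\alpha L/10}, \qquad T_{\mathrm{Gadox}} \approx 10^{-0.65} \approx 0.22, \qquad T_{\mathrm{YVO}} \approx 10^{-1.41} \approx 0.04, $$

so roughly 22% and 4% of the scintillation light would survive 1 mm of material, which is why the internal absorption is flagged as significant.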

Task-agnostic continual reinforcement learning: In praise of a simple baseline

Authors

Massimo Caccia,Jonas Mueller,Taesup Kim,Laurent Charlin,Rasool Fakoor

Published Date

2022

We study task-agnostic continual reinforcement learning (TACRL) in which standard RL challenges are compounded with partial observability stemming from task agnosticism, as well as additional difficulties of continual learning (CL), i.e., learning on a non-stationary sequence of tasks. Here we compare TACRL methods with their soft upper bounds prescribed by previous literature: multi-task learning (MTL) methods which do not have to deal with non-stationary data distributions, as well as task-aware methods, which are allowed to operate under full observability. We consider a previously unexplored and straightforward baseline for TACRL, replay-based recurrent RL (3RL), in which we augment an RL algorithm with recurrent mechanisms to address partial observability and experience replay mechanisms to address catastrophic forgetting in CL. Studying empirical performance in a sequence of RL tasks, we find surprising occurrences of 3RL matching and overcoming the MTL and task-aware soft upper bounds. We lay out hypotheses that could explain this inflection point of continual and task-agnostic learning research. Our hypotheses are empirically tested in continuous control tasks via a large-scale study of the popular multi-task and continual learning benchmark Meta-World. By analyzing different training statistics including gradient conflict, we find evidence that 3RL’s outperformance stems from its ability to quickly infer how new tasks relate with the previous ones, enabling forward transfer.
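
A minimal sketch of the two ingredients that define 3RL, a recurrent policy (to infer the current task from recent history under partial observability) and an experience replay buffer (against catastrophic forgetting); the network sizes and the toy transition are assumptions, and the actual RL update is omitted.

```python
# Recurrent policy + experience replay, the two components named above (illustrative only).
import random
from collections import deque

import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """GRU policy: the hidden state lets a task-agnostic agent infer the current task."""
    def __init__(self, obs_dim=8, act_dim=4, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq, h=None):
        out, h = self.rnn(obs_seq, h)
        return self.head(out), h

policy = RecurrentPolicy()
replay = deque(maxlen=10_000)  # experience replay buffer against catastrophic forgetting

# Collect one toy transition, store it, then sample a minibatch for an off-policy update.
obs = torch.randn(1, 1, 8)
logits, h = policy(obs)
action = logits.argmax(dim=-1)
replay.append((obs, action, 0.0, torch.randn(1, 1, 8)))

batch = random.sample(list(replay), k=min(32, len(replay)))
print(len(batch), logits.shape)
```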

Impact of the detected scintillation light intensity on neutron-gamma discrimination

Authors

Massimo Caccia,Marco Galoppo,Luca Malinverno,Pietro Monti-Guarnieri,Romualdo Santoro

Journal

IEEE Transactions on Nuclear Science

Published Date

2022/9/5

This article reports the method and the results of a study on the impact of the detected light intensity on gamma-neutron discrimination. In particular, the minimum number of photons required to achieve a statistically significant separation was measured and shown to be stable against a variation of the photon detection efficiency (PDE) of the system under study. The method, developed using an EJ-276 scintillator bar coupled to a silicon photomultiplier (SiPM), is of general interest. For this specific system, the minimum statistic was measured to be 317 ± 16 photons, corresponding to different values of the deposited energy, as the PDE was changed.
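
The degree of separation in such studies is conventionally quantified with the pulse-shape-discrimination figure of merit (the standard definition, not a formula quoted from the paper), built from the means and widths of the gamma and neutron distributions of the discrimination variable:

$$ \mathrm{FoM} = \frac{|\mu_\gamma - \mu_n|}{\mathrm{FWHM}_\gamma + \mathrm{FWHM}_n}, $$

with FoM ≈ 1.27 often quoted as the threshold for a 3σ separation of Gaussian-like peaks.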

Evaluation of a novel inorganic scintillator for applications in low dose rate (LDR) brachytherapy using both TE-cooled and room temperature silicon photomultipliers (SiPMs)

Authors

Michael Martyn,Wern Kam,Agnese Giaz,Simona Cometti,Romualdo Santoro,Peter Woulfe,Massimo Caccia,Sinead O'Keeffe

Published Date

2022/5/17

This work considers the use of an optical fiber sensor, employing a Gd2O2S:Tb inorganic scintillator, for applications in LDR brachytherapy for prostate cancer. Gd2O2S:Tb is characterized by a scintillation decay time of ~500 μs, implying that each primary gamma interaction produces a series of single photons, requiring the use of adequate detectors, such as Silicon Photomultipliers (SiPMs). These devices suffer from a significant Dark Count Rate (DCR), undermining system sensitivity. This work reports the result of a feasibility study where identical SiPMs, but different packages, are compared. Specifically, a room temperature SiPM in a ceramic package and a TE-cooled SiPM in a TO8 package. In the former, the optical fiber is in direct contact with the sensor surface, while in the latter there is a separation of ~3 mm. The signal, measured as Photon Count Rate (PCR), in excess of the DCR, was measured in a …

See the list of professors at Massimo Caccia's university (Università degli Studi dell'Insubria)

Massimo Caccia FAQs

What is Massimo Caccia's h-index at Università degli Studi dell'Insubria?

Massimo Caccia's h-index is 89 overall and 36 since 2020.

What are Massimo Caccia's top articles?

The top articles of Massimo Caccia at Università degli Studi dell'Insubria include:

Characterisation of a Silicon Photomultiplier Based Oncological Brachytherapy Fibre Dosimeter

Mass-manufacturable scintillation-based optical fiber dosimeters for brachytherapy

Method and system for meaningful counterfactual explanations

WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks?

Design and commissioning of a Silicon Photomultiplier-based dosimeter for Low Dose Rate (LDR) oncological brachytherapy

ORIGIN, an EU project targeting real-time 3D dose imaging and source localization in brachytherapy: Commissioning and first results of a 16-sensor prototype

Test beam results of the fiber-sampling dual-readout calorimeter

Task-Agnostic Continual Reinforcement Learning: Gaining Insights and Overcoming Challenges

...

What are Massimo Caccia's research interests?

The research interests of Massimo Caccia are: Physics

What is Massimo Caccia's total number of citations?

Massimo Caccia has 82,344 citations in total.
