Francesco Salvi

ORCID: 0009-0001-6884-6825
Research Areas
  • Topic Modeling
  • Artificial Intelligence in Healthcare and Education
  • Misinformation and Its Impacts
  • Machine Learning in Healthcare
  • Colorectal and Anal Carcinomas
  • Colorectal Cancer Surgical Treatments
  • Intracerebral and Subarachnoid Hemorrhage Research
  • Artificial Intelligence in Law
  • Hate Speech and Cyberbullying Detection
  • Cerebral Venous Sinus Thrombosis
  • Cancer Diagnosis and Treatment
  • Legal Language and Interpretation
  • Radiomics and Machine Learning in Medical Imaging
  • Gastric Cancer Management and Outcomes
  • Neurosurgical Procedures and Complications
  • Computational and Text Analysis Methods
  • Political Influence and Corporate Strategies

École Polytechnique Fédérale de Lausanne
2024

Fondazione Bruno Kessler
2024

Large language models (LLMs) can potentially democratize access to medical knowledge. While many efforts have been made to harness and improve LLMs' medical knowledge and reasoning capacities, the resulting models are either closed-source (e.g., PaLM, GPT-4) or limited in scale (<= 13B parameters), which restricts their abilities. In this work, we improve access to large-scale medical LLMs by releasing MEDITRON: a suite of open-source LLMs with 7B and 70B parameters adapted to the medical domain. MEDITRON builds on Llama-2 (through our adaptation of Nvidia's...

10.48550/arxiv.2311.16079 preprint EN cc-by arXiv (Cornell University) 2023-01-01
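
Since MEDITRON is released openly, the models can be run with standard open-source tooling. Below is a minimal sketch of loading the 7B variant for inference with Hugging Face transformers; the Hub id epfl-llm/meditron-7b and the prompt are illustrative assumptions, not details taken from the abstract:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "epfl-llm/meditron-7b"  # assumed Hub id for the 7B release
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads weights across available GPUs/CPU (requires accelerate)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: What are common risk factors for cerebral venous sinus thrombosis?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))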

Abstract Can large language models (LLMs) create tailor-made, convincing arguments to promote false or misleading narratives online? Early work has found that LLMs can generate content perceived as on par with, or even more persuasive than, human-written messages. However, there is still limited evidence regarding LLMs' capabilities in direct conversations with humans, the scenario in which these models are usually deployed. In this pre-registered study, we analyze the power of AI-driven...

10.21203/rs.3.rs-4429707/v1 preprint EN cc-by Research Square (Research Square) 2024-06-05

The development and popularization of large language models (LLMs) have raised concerns that they will be used to create tailor-made, convincing arguments to push false or misleading narratives online. Early work has found that LLMs can generate content perceived as at least on par with, and often more persuasive than, human-written messages. However, there is still limited knowledge about LLMs' capabilities in direct conversations with human counterparts and about how personalization can improve their performance. In this...

10.48550/arxiv.2403.14380 preprint EN arXiv (Cornell University) 2024-03-21

Abstract Large language and multimodal models (LLMs and LMMs) will transform access to medical knowledge and clinical decision support. However, the current leading systems fall short of this promise, as they are either limited in scale, which restricts their capabilities; closed-source, which limits the extensions and scrutiny that can be applied to them; or not sufficiently adapted to clinical settings, which inhibits their practical use. In this work, we democratize access to large-scale medical AI by developing MEDITRON: a suite of open-source LLMs and LMMs with 7B...

10.21203/rs.3.rs-4139743/v1 preprint EN cc-by Research Square (Research Square) 2024-04-03

We present a method based on natural language processing (NLP) for studying the influence of interest groups (lobbies) in the law-making process of the European Parliament (EP). We collect and analyze novel datasets of lobbies' position papers and of speeches made by members of the EP (MEPs). By comparing these texts on the basis of semantic similarity and entailment, we are able to discover interpretable links between MEPs and lobbies. In the absence of a ground-truth dataset of such links, we perform an indirect validation of the discovered links with a dataset,...

10.48550/arxiv.2309.11381 preprint EN other-oa arXiv (Cornell University) 2023-01-01
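
As a sketch of the kind of text comparison the abstract describes, the snippet below scores one hypothetical position-paper sentence against one hypothetical MEP speech sentence, using sentence embeddings for semantic similarity and an off-the-shelf NLI model for entailment. The model names (all-MiniLM-L6-v2, roberta-large-mnli) and the example texts are illustrative assumptions, not necessarily the paper's actual pipeline:

from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

# Hypothetical texts standing in for a lobby position paper and an MEP speech.
position_paper = "The carbon border tax should apply to all imported steel."
mep_speech = "We must extend the carbon levy to steel imports from outside the EU."

# (1) Semantic similarity: cosine similarity between sentence embeddings.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
emb = embedder.encode([position_paper, mep_speech], convert_to_tensor=True)
similarity = util.cos_sim(emb[0], emb[1]).item()

# (2) Entailment: does the speech follow from the position paper?
nli = pipeline("text-classification", model="roberta-large-mnli")
verdict = nli({"text": position_paper, "text_pair": mep_speech})

print(f"cosine similarity: {similarity:.3f}")
print(f"NLI prediction: {verdict}")

High similarity or an entailment verdict across many such sentence pairs is what would suggest a link between an MEP and a lobby.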