Kevin Wei

ORCID: 0009-0004-8522-4333
Research Areas
  • Ultrasound and Hyperthermia Applications
  • Ultrasound in Clinical Applications
  • Ethics and Social Impacts of AI
  • Cardiac Imaging and Diagnostics
  • Blockchain Technology Applications and Security
  • Explainable Artificial Intelligence (XAI)
  • Law, AI, and Intellectual Property
  • Photoacoustic and Ultrasonic Imaging
  • Angiogenesis and VEGF in Cancer
  • Scientific Computing and Data Management
  • Adversarial Robustness in Machine Learning
  • Radiation Dose and Imaging
  • Aortic Disease and Treatment Approaches
  • Cardiac Ischemia and Reperfusion
  • Cardiac Fibrosis and Remodeling
  • Multi-Agent Systems and Negotiation
  • Alcohol Consumption and Health Effects
  • Hemodynamic Monitoring and Therapy
  • Information and Cyber Security
  • Pharmacological Effects and Toxicity Studies
  • Mobile Crowdsensing and Crowdsourcing
  • Ultrasound and Cavitation Phenomena
  • Renal Transplantation Outcomes and Treatments
  • Smart Cities and Technologies
  • Computability, Logic, AI Algorithms

Keck Hospital of USC
2025

University of Arkansas System
2024

Harvard University Press
2023-2024

Centre for the Governance of AI
2023

Institute on Governance
2023

Brigham and Women's Hospital
2022

Oregon Health & Science University
2006-2014

University of Toronto
1999

Toronto General Hospital
1999

Charlottesville Medical Research
1998

External audits of AI systems are increasingly recognized as a key mechanism for AI governance. The effectiveness of an audit, however, depends on the degree of access granted to auditors. Recent state-of-the-art audits have primarily relied on black-box access, in which auditors can only query the system and observe its outputs. However, white-box access to a system's inner workings (e.g., weights, activations, gradients) allows an auditor to perform stronger attacks, more thoroughly interpret models, and conduct fine-tuning....

10.1145/3630106.3659037 article EN cc-by 2022 ACM Conference on Fairness, Accountability, and Transparency 2024-06-03

Increased delegation of commercial, scientific, governmental, and personal activities to AI agents—systems capable of pursuing complex goals with limited supervision—may exacerbate existing societal risks and introduce new risks. Understanding and mitigating these risks involves critically evaluating existing governance structures, revising and adapting these structures where needed, and ensuring accountability of key stakeholders. Information about where, why, how, and by whom certain AI agents are used, which we refer to as visibility, is...

10.1145/3630106.3658948 article EN cc-by 2022 ACM Conference on Fairness, Accountability, and Transparency 2024-06-03

This paper presents a survey of local US policymakers' views on the future impact and regulation of AI. Our survey provides insight into their expectations regarding the effects of AI on their communities and the nation, as well as their attitudes towards specific regulatory policies. Conducted in two waves (2022 and 2023), the survey captures changes following the release of ChatGPT and the subsequent surge in public awareness of AI. Local policymakers express a mix of concern, optimism, and uncertainty about AI's impacts, anticipating significant societal risks such as increased...

10.48550/arxiv.2501.09606 preprint EN arXiv (Cornell University) 2025-01-16

Increasingly many AI systems can plan and execute interactions in open-ended environments, such as making phone calls or buying online goods. As developers grow the space of tasks that such agents can accomplish, we will need tools both to unlock their benefits and to manage their risks. Current tools are largely insufficient because they are not designed to shape how agents interact with existing institutions (e.g., legal and economic systems) and actors (e.g., digital service providers, humans, other AI agents). For example, alignment techniques by...

10.48550/arxiv.2501.10114 preprint EN arXiv (Cornell University) 2025-01-17

Leading AI developers and startups are increasingly deploying agentic systems that can plan and execute complex tasks with limited human involvement. However, there is currently no structured framework for documenting the technical components, intended uses, and safety features of agentic systems. To fill this gap, we introduce the AI Agent Index, the first public database to document information about deployed agentic systems. For each system that meets the criteria for inclusion in the index, we document the system's components (e.g., base model, reasoning...

10.48550/arxiv.2502.01635 preprint EN arXiv (Cornell University) 2025-02-03

Recent decisions by leading AI labs to either open-source their models or restrict access have sparked debate about whether, and how, increasingly capable AI models should be shared. Open-sourcing in AI typically refers to making model architecture and weights freely and publicly accessible for anyone to modify, study, build on, and use. This offers advantages such as enabling external oversight, accelerating progress, and decentralizing control over AI development. However, it also presents a growing potential for misuse and unintended...

10.2139/ssrn.4596436 article EN SSRN Electronic Journal 2023-01-01

Aims: It has been reported that imbibing red wine increases coronary blood flow reserve acutely. In the absence of changes in driving pressure, any such increase should occur through a decrease in capillary resistance, which in turn is determined by capillary dimensions and whole-blood viscosity. Since alcohol intake is unlikely to acutely change capillary dimensions, we hypothesized that it must increase flow reserve by reducing...

10.1093/ejechocard/jeq042 article EN European Journal of Echocardiography 2010-04-08

Recent decisions by leading AI labs to either open-source their models or restrict access have sparked debate about whether, and how, increasingly capable AI models should be shared. Open-sourcing in AI typically refers to making model architecture and weights freely and publicly accessible for anyone to modify, study, build on, and use. This offers advantages such as enabling external oversight, accelerating progress, and decentralizing control over AI development. However, it also presents a growing potential for misuse and unintended...

10.48550/arxiv.2311.09227 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Increased delegation of commercial, scientific, governmental, and personal activities to AI agents -- systems capable of pursuing complex goals with limited supervision -- may exacerbate existing societal risks and introduce new risks. Understanding and mitigating these risks involves critically evaluating existing governance structures, revising and adapting these structures where needed, and ensuring accountability of key stakeholders. Information about where, why, how, and by whom certain AI agents are used, which we refer to as visibility, is...

10.48550/arxiv.2401.13138 preprint EN public-domain arXiv (Cornell University) 2024-01-01
