Confidence-Building Measures for Artificial Intelligence: Workshop proceedings
We explore large-scale training of generative models on video data. Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions and aspect ratios. We leverage a transformer architecture that operates on spacetime patches of video and image latent codes. Our largest model, Sora, is capable of generating a minute of…
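As a rough illustration of what "operating on spacetime patches of latent codes" could look like, the sketch below splits a video latent into fixed-size spacetime patches and flattens each into a token vector for a transformer. The function name, patch sizes, and tensor layout are assumptions made for illustration, not details of Sora's actual implementation.

```python
import numpy as np

def spacetime_patchify(latent, pt=2, ph=2, pw=2):
    """Split a video latent of shape (T, H, W, C) into flattened spacetime patches.

    Each patch covers pt frames x ph x pw latent pixels and is flattened into a
    single token vector, giving a (num_patches, patch_dim) array. The patch
    sizes here are illustrative placeholders, not values from the Sora report.
    """
    T, H, W, C = latent.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "latent dims must divide patch size"
    # carve out patch-index axes and within-patch axes
    x = latent.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    # group patch-index axes first, then the within-patch axes
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)
    return x.reshape(-1, pt * ph * pw * C)

# Example: a 16-frame, 32x32 latent with 4 channels -> 2048 tokens of dimension 32
tokens = spacetime_patchify(np.random.randn(16, 32, 32, 4))
print(tokens.shape)  # (2048, 32)
```

Because both images and videos reduce to the same kind of token sequence under this scheme (an image is just a single-frame video), one transformer can, in principle, be trained jointly on media of variable durations, resolutions, and aspect ratios.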
To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge.
We’re forming a new industry body to promote the safe and responsible development of frontier AI systems: advancing AI safety research, identifying best practices and standards, and facilitating information sharing among policymakers and industry.
We’re developing a blueprint for evaluating the risk that a large language model (LLM) could aid someone in creating a biological threat. In an evaluation involving both biology experts and students, we found that GPT-4 provides at most a mild uplift in biological threat creation accuracy. While this uplift is not large enough to be conclusive,…
Recent advances in deep reinforcement learning (RL) have demonstrated superhuman performance by artificially intelligent (AI) agents on a variety of impressive tasks. Current approaches for achieving these results involve developing an agent that primarily learns to master a narrow task of interest. Untrained agents must perform these tasks often, and…
We have partnered with international news organizations Le Monde and Prisa Media to bring French and Spanish news content to ChatGPT.