Confidence-Building Measures for Artificial Intelligence: Workshop proceedings
We’re launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more.
How can MIT’s community leverage generative AI to support learning and work on campus and beyond? At MIT’s Festival of Learning 2024, faculty and instructors, students, staff, and alumni exchanged perspectives about the digital tools and innovations they’re experimenting with in the classroom. Panelists agreed that generative AI should be used to scaffold — not…
We explore large-scale training of generative models on video data. Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions and aspect ratios. We leverage a transformer architecture that operates on spacetime patches of video and image latent codes. Our largest model, Sora, is capable of generating a minute of…
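To make the "spacetime patches" idea concrete, here is a minimal sketch of how a latent video volume could be cut into non-overlapping spacetime patches and flattened into a token sequence for a transformer. The tensor shape, patch sizes, and latent dimensionality below are illustrative assumptions, not Sora's actual configuration.

```python
import numpy as np

# Hypothetical latent video produced by a visual encoder: (frames, height, width, channels).
# All sizes here are assumed for illustration only.
latent = np.random.randn(16, 32, 32, 4)   # T=16, H=32, W=32, C=4
pt, ph, pw = 2, 4, 4                      # spacetime patch size along (time, height, width)

T, H, W, C = latent.shape
assert T % pt == 0 and H % ph == 0 and W % pw == 0

# Split the latent volume into spacetime patches, then flatten each patch into a
# single token vector, yielding a sequence the transformer can attend over.
patches = (
    latent.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
          .transpose(0, 2, 4, 1, 3, 5, 6)   # gather the within-patch axes together
          .reshape(-1, pt * ph * pw * C)    # (num_tokens, token_dim)
)

print(patches.shape)  # (8 * 8 * 8, 2 * 4 * 4 * 4) = (512, 128)
```

Because the patching is defined over the full (time, height, width) latent grid, the same tokenization applies to videos and images of varying durations, resolutions, and aspect ratios, which is what lets a single transformer train on heterogeneous visual data.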
Large Language Models (LLMs) have emerged and advanced rapidly, adding a new level of complexity to the field of Artificial Intelligence. Through intensive training, these models have mastered impressive Natural Language Processing, Natural Language Understanding, and Natural Language Generation tasks such as answering questions, performing natural language inference, and summarising material. They have also…