Alex Tamkin, a PhD student at Stanford University specializing in generative AI, discusses the new era of pre-trained foundation models in AI. Unlike traditional machine learning models trained for specific tasks, models such as ChatGPT are exposed to a much wider range of knowledge drawn from text, images, antibodies, or DNA sequences. This allows for a higher level of interactivity and the ability to generate outputs, not just single judgments.
GPT-3 is a large language model that is part of the new era of pre-trained foundation models in AI.
Traditional machine learning models were trained for specific tasks, like classifying emails or matching resumes, whereas pre-trained foundation models are exposed to a much broader base of knowledge drawn from the text, images, antibodies, or DNA sequences available on the web.
Pre-trained foundation models provide a base of knowledge that helps the model learn new things faster and interact in an open-ended way with users.
This new era of foundation models allows for a higher level of interactivity and the ability to generate outputs, as opposed to just outputting a single judgment.
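The contrast between a single judgment and open-ended generation can be made concrete with a toy sketch. The bigram model below is a drastically simplified, assumption-laden stand-in for large-scale next-token pretraining (it is not how GPT-3 works); it only illustrates how a model trained to predict what comes next can then generate free-form continuations, while a task-specific classifier returns one label.

```python
import random
from collections import defaultdict

# Toy "pretraining": learn word-to-next-word transitions from a tiny corpus.
corpus = "the model learns patterns from text and the model generates text".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a continuation word by word, like next-token generation."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

def classify(text):
    """A task-specific model, by contrast, emits a single judgment."""
    return "about-models" if "model" in text else "other"

print(classify("the model learns"))  # a single label
print(generate("the", 5))            # an open-ended continuation
```

The same learned transition table can be reused to continue any prompt, whereas the classifier is locked to its one task; that reusability, scaled up enormously, is the point of the foundation-model paradigm described above.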
The advancement of pre-trained foundation models is attributed to training on ever more data and to the development of new algorithms and ways of processing information.
Learning from human feedback has so far been used less in text-to-image models than in language models.
Generative AI has applications well beyond language models, including coding and software engineering, design, software automation, administrative workflows, astronomy, genomics, proteins, and everyday tasks and research.
The reliability of these models is a major challenge, and large-scale deployment is still largely at the research-and-development stage.
Organizations need to consider the risks and promises of the technology while integrating it into their workflows.
How AI models are integrated into organizations and industries depends on factors such as whether to build models in-house or use API-based models, and on the control and security of sensitive user data. The role of software developers may evolve as AI becomes widespread in software development.
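One concrete form the data-control question takes: if an organization relies on a third-party API model rather than an in-house one, sensitive user data may need to be scrubbed before a prompt leaves its systems. The patterns and placeholder tokens below are illustrative assumptions, a minimal sketch rather than a complete PII solution.

```python
import re

# Hypothetical patterns for two obvious identifier types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def redact(text):
    """Replace obvious identifiers with placeholders before the API call."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Summarize the ticket from jane.doe@example.com, call 555-123-4567."
print(redact(prompt))
# → Summarize the ticket from [EMAIL], call [PHONE].
```

An in-house model avoids this boundary entirely, which is one reason the build-versus-buy decision above is also a data-governance decision.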
The trust placed in AI depends on the use case and the position of the tool within the organization.
Currently, the most promising applications are in low-stakes, creative domains where human review is present.
The human component of using the AI system productively is important, and people need to learn to play it like an instrument.
There will be a transformation in the workforce as people learn to work productively with these systems.
Expertise in one's field of operation, and fluency in the specific language that field uses, are also important components of working effectively with AI.