- Nvidia Corporation presented a collection of new technologies that would make it easier for businesses to develop sophisticated natural language processing models.
- Nvidia introduced two cloud services meant to simplify the process of developing AI applications.
BioNeMo, the first product, is a framework to construct natural language processing models that can aid biologists and chemists in their research. In addition to the framework, Nvidia also introduced two cloud-based Artificial Intelligence (AI) services. The first service will facilitate the usage of AI models created using BioNeMo, while the second will accelerate the task of applying neural networks for text processing activities such as summarizing research papers.
Configuration settings, known as parameters, determine how an AI model interprets data and makes decisions. As a rule, the more parameters a model has, the more accurately it can process data.
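As a rough illustration of how parameter counts grow, consider a single dense (fully connected) layer in a neural network. The layer width and layer count below are arbitrary assumptions chosen for the sketch, not figures from Nvidia:

```python
def dense_params(n_in: int, n_out: int) -> int:
    """Parameters in one dense layer: a weight per input-output pair,
    plus one bias per output."""
    return n_in * n_out + n_out

# A hypothetical transformer-scale layer width.
width = 4096
per_layer = dense_params(width, width)
print(per_layer)  # 16781312, i.e. roughly 16.8 million parameters

# Stacking many such layers is how models reach billions of parameters.
print(per_layer * 96)  # 1611005952, roughly 1.6 billion
```

Real architectures contain several kinds of layers, so this arithmetic only sketches the order of magnitude.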
Researchers have recently created several natural language processing models with billions of parameters. These neural networks are known as large language models, or LLMs. The most sophisticated LLMs can be applied not only to traditional text processing use cases, such as summarizing research articles, but can also generate software code and perform various other tasks.
Scientists have found that the processing capabilities of LLMs are advantageous for biomolecular research. BioNeMo, the new framework unveiled by Nvidia, is geared exclusively to train LLMs that can help research in the fields of biology and chemistry. BioNeMo contains features that simplify the deployment of neural networks in production.
According to Nvidia, scientists can use the framework to train LLMs with billions of parameters. In addition, BioNeMo includes four language models that can be used to advance research projects more quickly than neural networks that must be constructed from scratch.
ESM-1 and OpenFold, the first two pre-trained language models, are tuned to predict the characteristics of proteins. BioNeMo also includes the neural network ProtT5, which can produce novel protein sequences. The fourth neural network in BioNeMo, MegaMolBART, can be used for tasks such as predicting how molecules interact with one another.
New cloud services
In addition to BioNeMo, Nvidia introduced two cloud services meant to simplify the process of developing AI applications. Both options provide access to a pre-packaged collection of language models.
The first cloud service, BioNeMo Service, gives access to two language models developed using Nvidia’s recently released BioNeMo framework. The two neural networks have been tuned to facilitate the study of biology and chemistry. According to Nvidia, the models can be customized even when they contain billions of parameters.
Nvidia anticipates that biotechnology and pharmaceutical firms will use the BioNeMo Service to expedite drug discovery. According to the chipmaker, the service can help scientists design novel biomolecules for medicinal uses and carry out other medical research tasks.
Jensen Huang, founder and CEO of Nvidia, stated, “Large language models can disrupt every business. The flexibility to tweak foundation models makes LLMs accessible to millions of developers, who can now construct language services and fuel scientific discoveries without having to design a considerable model from the start.”
NeMo LLM Service is the second cloud service that Nvidia launched. It gives access to a set of pre-trained language models with between three billion and 530 billion parameters. The language models can generate text summaries, power chatbots, and write software code.
The neural networks in the NeMo LLM Service have been pre-trained by Nvidia, but organizations can train them further using custom datasets. Familiarizing a neural network with a company’s data helps it process that data more accurately.
Firms can train the AI models in the NeMo LLM Service using a technique known as prompt learning. In prompt learning, a neural network is given a phrase fragment, such as “Nvidia produces chips for,” and instructed to complete the text. By repeating this procedure with many examples, developers can train a neural network to perform specific computational tasks.
The key advantage of prompt learning over conventional AI training approaches is speed. According to Nvidia, customers can train the neural networks offered by the NeMo LLM Service in minutes or hours, as opposed to the months this job typically requires. After training is complete, the neural networks can be deployed to the cloud or to a business’s on-premises infrastructure.
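As a minimal sketch of the idea, the fragment-and-completion pairs used in prompt learning might be assembled like this. The prompt format and example pairs are illustrative assumptions, not the NeMo LLM Service’s actual interface:

```python
from typing import Optional

def build_prompt(fragment: str, completion: Optional[str] = None) -> str:
    """Pair a phrase fragment with its desired completion.

    During training the completion is supplied; at inference time it is
    omitted so the model fills in the text itself.
    """
    prompt = f"Complete the sentence: {fragment}"
    if completion is not None:
        prompt += f" {completion}"
    return prompt

# Repeating labeled pairs like these is what steers the model toward a
# task. (Both pairs are made-up examples for illustration.)
training_prompts = [
    build_prompt("Nvidia produces chips for", "data centers and gaming PCs."),
    build_prompt("The paper's key finding is", "a one-sentence summary."),
]

# At inference time the fragment is sent without a completion.
inference_prompt = build_prompt("Nvidia produces chips for")
print(inference_prompt)  # Complete the sentence: Nvidia produces chips for
```

In actual prompt-learning systems, these pairs tune a small set of learnable prompt embeddings while the large model’s own parameters stay frozen, which is why the process is so much faster than full retraining.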
The NeMo LLM Service and the BioNeMo Service will be available next month for early access. The beta version of the BioNeMo framework is currently available.