Meta Unveils its First Open AI Model That Can Process Images

 

Meta has released new versions of its renowned open source AI model Llama, including small and medium-sized models capable of running workloads on edge and mobile devices. 

Llama 3.2 models were showcased at the company’s annual Meta Connect event. They support multilingual text generation and vision applications such as image recognition. 

“This is our first open source, multimodal model, and it’s going to enable a lot of interesting applications that require visual understanding,” stated Mark Zuckerberg, CEO of Meta.

Llama 3.2 builds on Llama 3.1, the large open-source model released in late July. That model was the largest open-source AI model to date, with 405 billion parameters (parameters are the adjustable variables within an AI model that help it learn patterns from data). A larger parameter count generally gives a model greater capacity to interpret and generate human-like text. 

The new Llama models presented at Meta Connect 2024 are significantly smaller.

[…]