Meta Platforms said on Tuesday that it would provide researchers with access to components of a new “human-like” artificial intelligence model that it said could analyze and complete unfinished images more accurately than existing models.
The model, I-JEPA, uses background knowledge about the world to fill in missing pieces of images, rather than looking only at nearby pixels like other generative AI models, the company said.
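The distinction can be illustrated with a toy sketch (not Meta's implementation; all names, dimensions, and the encoder/predictor here are hypothetical): instead of predicting the raw pixels of a missing patch, a JEPA-style model predicts the *representation* of that patch from the representations of the visible context, and measures its error in that embedding space.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(patches):
    """Toy 'encoder': a fixed linear map from pixel space to an
    8-dimensional embedding space (hypothetical, for illustration)."""
    W = np.full((patches.shape[-1], 8), 0.1)
    return patches @ W

# An image split into 4 flattened patches of 16 pixels each;
# the last patch is 'masked' and must be inferred from the rest.
patches = rng.normal(size=(4, 16))
visible, masked = patches[:3], patches[3:]

context_emb = encode(visible)   # embeddings of the visible context
target_emb = encode(masked)     # embedding the predictor should match

# Toy 'predictor': guess the missing embedding from the context.
pred_emb = context_emb.mean(axis=0, keepdims=True)

# JEPA-style objective: distance in representation space,
# not a pixel-level reconstruction loss.
loss = np.mean((pred_emb - target_emb) ** 2)
```

Because the loss is computed on embeddings rather than pixels, the model is pushed to capture higher-level structure of the scene, which is the property the article attributes to I-JEPA.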
That approach incorporates the kind of human-like reasoning advocated by Meta’s top AI scientist, Yann LeCun, and helps the technology avoid errors common in AI-generated images, such as hands with extra fingers, he said.
Meta, which owns Facebook and Instagram, is a prolific publisher of open-sourced AI research via its in-house research lab. Chief Executive Mark Zuckerberg has said that sharing models developed by Meta’s researchers can help the company by spurring innovation, spotting safety gaps and lowering costs.
“For us, it’s way better if the industry standardizes on the basic tools that we’re using and therefore we can benefit from the improvements that others make,” he told investors in April.
The company’s executives have dismissed warnings from others in the industry about the potential dangers of the technology, declining to sign a statement last month, backed by top executives from OpenAI, DeepMind, Microsoft and Google, that equated its risks with those of pandemics and wars.
LeCun, considered one of the “godfathers of AI,” has railed against “AI doomerism” and argued in favor of building safety checks into AI systems.
Meta is also starting to incorporate generative AI features into its consumer products, such as ad tools that can create image backgrounds and an Instagram product that can modify user photos, both based on text prompts.