Intel Among Major Companies Committed to Developing Open Generative AI Tools for Enterprises

Can generative AI built for enterprise use, such as automating reports or spreadsheet formulas, ever be interoperable? That is the question a group of organizations including Cloudera and Intel, backed by the Linux Foundation, the nonprofit devoted to open-source initiatives, is trying to answer.

On Tuesday, the Linux Foundation announced the launch of the Open Platform for Enterprise AI (OPEA), a project to foster the development of open, multi-provider, and composable (that is, modular) generative AI systems. Housed under the Linux Foundation’s LF AI and Data organization, which is dedicated to AI- and data-related initiatives, OPEA aims to pave the way for hardened, scalable generative AI systems that draw on the best open-source innovation from across the ecosystem, LF AI and Data’s executive director, Ibrahim Haddad, said in a statement.

“With OPEA, new possibilities in AI will arise thanks to a detailed, composable framework that stands at the forefront of technology stacks,” Haddad said. “This initiative reflects our mission of driving open-source innovation and collaboration within the AI and data communities under a neutral and open governance model.”

OPEA, one of the Linux Foundation’s Sandbox Projects, counts among its members enterprise heavyweights such as Cloudera, Intel, IBM-owned Red Hat, Hugging Face, Domino Data Lab, MariaDB, and VMware.

What might they build together? Haddad suggests a few possibilities: optimized support for AI toolchains and compilers, which allow AI workloads to run across different hardware components, as well as “heterogeneous” pipelines for retrieval-augmented generation (RAG).

RAG is increasingly popular in enterprise applications of generative AI, and it is easy to see why. Most generative AI models are limited in their answers to the data they were trained on, but RAG extends a model’s knowledge base beyond that original training data. Before generating a response or performing a task, a RAG model consults this external data, which could be company-specific information, a public database, or a combination of the two.
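
To make the pattern concrete, here is a minimal, self-contained Python sketch of the RAG flow described above. It is illustrative only: the keyword-overlap retriever stands in for a real vector store, generate() stands in for an actual model call, and the documents are invented.

```python
# Minimal sketch of the RAG pattern: retrieve relevant external
# documents first, then hand them to the generator as context.
# The retriever and generator are stand-ins, not a real AI stack.

DOCUMENTS = [
    "Expense reports must be filed within 30 days of purchase.",
    "The quarterly sales spreadsheet lives in the finance shared drive.",
    "Support tickets are triaged within one business day.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    """Placeholder for a model call; a real system would prompt an
    LLM with the retrieved context prepended to the question."""
    return f"Answering {query!r} using context: {' | '.join(context)}"

query = "When are expense reports due?"
print(generate(query, retrieve(query, DOCUMENTS)))
```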

Intel elaborated further in its own press release:

Enterprises often struggle with a do-it-yourself approach to RAG because there are no standardised components. This absence of standards prevents the easy deployment of open, interoperable RAG solutions that would help accelerate time to market. OPEA seeks to resolve these problems by working with the industry to standardise components, including frameworks, architecture blueprints, and reference solutions.
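
As a rough illustration of what standardised, swappable RAG components might look like in code, the sketch below defines a shared retriever interface that different vendors’ implementations could conform to, so the surrounding pipeline never changes. Everything here is hypothetical; it is not OPEA’s actual specification, and all names are invented.

```python
# Hypothetical shared contract for RAG retrievers. If vendors agreed
# on an interface like this, their components would be interchangeable.

from abc import ABC, abstractmethod

class Retriever(ABC):
    """Common interface any vendor's retriever would implement."""
    @abstractmethod
    def retrieve(self, query: str, k: int) -> list[str]: ...

class KeywordRetriever(Retriever):
    """One possible implementation; a vector-store retriever from a
    different vendor could be dropped in without changing answer()."""
    def __init__(self, docs: list[str]):
        self.docs = docs

    def retrieve(self, query: str, k: int) -> list[str]:
        terms = set(query.lower().split())
        ranked = sorted(self.docs,
                        key=lambda d: len(terms & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

def answer(query: str, retriever: Retriever) -> str:
    # The pipeline depends only on the interface, not the vendor.
    context = retriever.retrieve(query, k=2)
    return f"{query} -> context: {context}"

docs = ["Invoices are paid net 30.", "Passwords rotate every 90 days."]
print(answer("When are invoices paid?", KeywordRetriever(docs)))
```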

OPEA also puts a focus on evaluation.

In its GitHub repository, OPEA proposes a rubric for grading generative AI systems along four axes: performance, features, trustworthiness, and “enterprise-grade” readiness. Performance, as OPEA defines it, means benchmarks drawn from real-world use cases. Features cover a system’s interoperability, deployment options, and ease of use. Trustworthiness assesses an AI model’s ability to guarantee quality and robustness. Enterprise readiness centres on the requirements for getting a system up and running without major issues.
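
One way to picture that rubric is as a simple scorecard. The four axis names below come from the article itself; the 0-to-10 scale, the field comments, and the unweighted average are assumptions made for illustration, not OPEA’s actual methodology.

```python
# Hypothetical scorecard for the four-axis rubric. Axis names match
# the article; the scoring scale and aggregation are invented.

from dataclasses import dataclass

@dataclass
class Scorecard:
    performance: int           # real-world use-case benchmarks
    features: int              # interoperability, deployment, ease of use
    trustworthiness: int       # ability to guarantee quality and robustness
    enterprise_readiness: int  # what it takes to run without major issues

    def overall(self) -> float:
        """Unweighted mean on a 0-10 scale (an assumption, not OPEA's rule)."""
        return (self.performance + self.features
                + self.trustworthiness + self.enterprise_readiness) / 4

print(Scorecard(performance=7, features=8,
                trustworthiness=6, enterprise_readiness=5).overall())  # 6.5
```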

Rachel Roumeliotis, Intel’s director of open source strategy, said that OPEA will work with the open-source community to offer tests based on the rubric, and to provide assessments and grading of generative AI deployments on request.

OPEA’s other projects are somewhat up in the air for now. But Haddad floated the possibility of open model development along the lines of Meta’s expanding Llama family and Databricks’ DBRX. Toward that end, Intel has already contributed reference implementations of a generative-AI-powered chatbot, document summarizer, and code generator, optimized for its Xeon 6 and Gaudi 2 hardware, to the OPEA repo.

Clearly, OPEA’s members are very invested (and self-interested) in building generative AI tooling for enterprises. Cloudera recently launched partnerships to create what it is pitching as an “AI ecosystem” in the cloud. Domino offers a suite of apps for building and auditing business-focused generative AI. And VMware, which is oriented toward the infrastructure side of enterprise AI, rolled out new “private AI” compute products last August.

The question is whether these vendors will actually work together to build cross-compatible AI tools under OPEA.

There is an obvious benefit to doing so. Customers are happy to draw on multiple vendors depending on their needs, resources, and budgets. But history has shown that it is all too easy to slide toward vendor lock-in. Let’s hope that is not the outcome here.
