Navigating the Fine Line: AI Transparency vs. ‘Open-Washing’ with Endor Labs

As the AI industry shifts its focus toward transparency and security, discussions regarding the concept of "openness" in AI models are intensifying. Experts from open-source security firm Endor Labs shared insights on these critical issues.
Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, highlighted the need to borrow lessons from software security for AI systems. The US government's 2021 Executive Order on Improving the Nation's Cybersecurity mandates that organizations produce a software bill of materials (SBOM) for every product sold to federal agencies. An SBOM is a comprehensive inventory of the open-source components in a product, making it easier to detect vulnerabilities. Stiefel argued that mirroring this approach for AI systems is the logical next step, since greater transparency gives defenders better security and deeper insight into a model's datasets and training processes.
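As a rough illustration of what an "AI bill of materials" could look like, the sketch below extends the SBOM idea to cover model weights and training data alongside code dependencies. It is not an official SBOM schema, and every product, component, and version name in it is hypothetical.

```python
import json

# Minimal, hypothetical "AI bill of materials" sketch.
# Same spirit as an SBOM: enumerate every component that ships with
# (or shapes) the product so vulnerabilities and provenance can be traced.
ai_bom = {
    "product": "example-chat-service",  # hypothetical product name
    "components": [
        {"type": "library", "name": "torch", "version": "2.2.0"},
        {"type": "model-weights", "name": "example-llm-7b", "revision": "abc123"},
        {"type": "dataset", "name": "example-pretraining-corpus", "snapshot": "2024-01"},
    ],
}

print(json.dumps(ai_bom, indent=2))
```

In practice such an inventory would be generated automatically by build tooling rather than written by hand, but the structure conveys the point: datasets and weights are tracked with the same rigor as code dependencies.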
Defining "Open" AI Models
Julien Sobrier, Senior Product Manager at Endor Labs, added further clarity to the discussion of AI transparency and what "open" really means. He explained that for an AI model to be genuinely open, all of its elements, including the training set, the weights, and the programs used to train and evaluate it, must be publicly accessible. The lack of a shared definition among major players has created confusion: the debate began with OpenAI and now extends to Meta's Llama model, which is perceived as more open. Sobrier cautioned against "open-washing", where organizations claim transparency while quietly imposing restrictions.
He noted that some cloud providers sell paid versions of open-source projects without contributing back to the community, a troubling trend in which source code remains open yet significant commercial restrictions are added. Other "open" LLM providers, such as Meta, could adopt the same approach as they seek to maintain a competitive advantage.
Promoting Transparency with DeepSeek
DeepSeek, a rising player in the AI landscape, has tackled some of these challenges by releasing parts of its models and code as open source, a move praised for improving transparency and contributing to security. Stiefel said that DeepSeek's release of model weights, along with its push toward transparency in its hosted services, could significantly help the community audit those systems for security vulnerabilities.
That transparency also offers insight into how DeepSeek manages its AI infrastructure at scale, and it makes it easier for others to stand up comparable hosted services of their own while following security best practices.
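As a small sketch of what "auditable" means in practice, publicly released weights can be enumerated and reviewed before anyone decides to run them. The example below assumes the huggingface_hub client and the deepseek-ai/DeepSeek-R1 repository name; neither is confirmed by the article itself.

```python
from huggingface_hub import list_repo_files  # pip install huggingface_hub

# Hypothetical audit starting point: list what a publicly released model
# actually ships (weight shards, tokenizer files, configs) so it can be
# reviewed before being pulled into production.
REPO_ID = "deepseek-ai/DeepSeek-R1"  # assumed repository name

files = list_repo_files(REPO_ID)
weights = [f for f in files if f.endswith((".safetensors", ".bin"))]
configs = [f for f in files if f.endswith(".json")]

print(f"{len(weights)} weight files, {len(configs)} config/tokenizer files")
for name in sorted(configs):
    print(" -", name)
```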
The Rise of Open-Source AI
The trend toward open-source AI is steadily gaining momentum. An IDC report finds that 60% of organizations now prefer open-source models over commercial alternatives for generative AI projects. Endor Labs' own findings suggest organizations use between seven and twenty-one open-source models per application, driven by the need to pick the best model for each task while keeping API costs under control.
Stiefel pointed to the vibrancy of the open-source model community, noting that more than 3,500 models have already been derived from DeepSeek's original R1 model. Sobrier added that with this growing reliance on open-source AI models, evaluating their dependencies becomes crucial to ensuring they are used safely and legally.
Addressing AI Model Risks
As the adoption of open-source AI models expands, effective risk management becomes imperative. Stiefel outlined a structured approach built on three steps, with a rough sketch of the first step after the list:
- Discovery: Identify the AI models in use within the organization.
- Evaluation: Assess these models for potential security and operational risks.
- Response: Establish and enforce guidelines to guarantee secure model use.
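A minimal sketch of the discovery step, assuming a Python codebase and Hugging Face-style model identifiers, is shown below. The organization names and regular expression are illustrative only and are not Endor Labs' tooling.

```python
import re
from pathlib import Path

# Hypothetical discovery pass: walk a repository and collect anything that
# looks like a Hugging Face-style model reference ("org/model-name") so the
# security team has an inventory to evaluate in the next step.
MODEL_REF = re.compile(r"[\"']([\w.-]+/[\w.-]+)[\"']")
KNOWN_ORGS = {"deepseek-ai", "meta-llama", "mistralai"}  # assumed org names

def discover_models(repo_root: str) -> set[str]:
    found = set()
    for path in Path(repo_root).rglob("*.py"):
        for match in MODEL_REF.findall(path.read_text(errors="ignore")):
            org = match.split("/")[0]
            if org in KNOWN_ORGS:
                found.add(match)
    return found

if __name__ == "__main__":
    for model in sorted(discover_models(".")):
        print(model)
```

In a real deployment this inventory would feed the evaluation and response steps, where each discovered model is scored for risk and checked against usage policy.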
Striking the right balance between fostering innovation and managing risk is essential, and the security team needs comprehensive visibility into these processes to maintain oversight and accountability.
Sobrier stressed that the community needs to develop best practices for building and consuming AI models safely, along with a shared methodology for evaluating them on security, quality, and openness.
Looking Ahead
Ensuring responsible AI development requires frameworks that apply consistently across SaaS models, API integrations, and open-source models. Sobrier warned against complacency in the fast-moving AI sector, urging the community to build robust methodologies for assessing models on parameters such as security, quality, and openness.
In summary, as the discourse around AI transparency evolves, organizations must consider security and openness simultaneously to foster a responsible AI ecosystem.
See also: AI in 2025: Purpose-driven models, human integration, and more