Posts

Future Proofing the Foundation for AI-Ready Security Operations

Last December, the International Telecommunication Union (ITU), the United Nations’ (UN) specialized agency for information and communication technologies, endorsed the Open Cybersecurity Schema Framework (OCSF) for ratification as an international standard by June 2026. Standardization is now a global necessity as governments worldwide integrate ITU standards into their national cybersecurity policies. First, what is OCSF? OCSF provides a standardized approach to streamline security operations, improve threat detection, and accelerate incident response, unlocking the full potential of security data. A standardized schema for security events normalizes data from disparate sources, creating a unified foundation for advanced analytics and AI-powered tools. This standardization is crucial for unleashing generative AI in cybersecurity, allowing organizations to better identify patterns and correlations across data sources. Data Standardiza...
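To make the normalization idea concrete, here is a minimal sketch in Python. The field names below are simplified illustrations of a shared event shape, not the actual OCSF classes or attribute names; the point is that once two differently shaped feeds are mapped into one schema, a single query works across both.

```python
from datetime import datetime, timezone

# Illustrative common event shape, loosely inspired by the idea behind OCSF.
# The real OCSF classes and attribute names differ; these are placeholders.
def normalize_firewall_event(raw: dict) -> dict:
    """Map a vendor-specific firewall log into the common shape."""
    return {
        "class_name": "Network Activity",
        "time": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc).isoformat(),
        "severity": raw.get("sev", "Informational"),
        "src_ip": raw["source"],
        "action": raw["fw_action"],
    }

def normalize_auth_event(raw: dict) -> dict:
    """Map an auth log with entirely different field names into the same shape."""
    return {
        "class_name": "Authentication",
        "time": raw["timestamp"],
        "severity": raw.get("level", "Informational"),
        "src_ip": raw["client_addr"],
        "action": "logon_success" if raw["ok"] else "logon_failure",
    }

# After normalization, one detection rule correlates events from both feeds.
events = [
    normalize_firewall_event({"epoch": 1735689600, "source": "10.0.0.5",
                              "fw_action": "deny", "sev": "High"}),
    normalize_auth_event({"timestamp": "2025-01-01T00:00:00+00:00",
                          "client_addr": "10.0.0.5", "ok": False}),
]
suspicious = [e for e in events if e["src_ip"] == "10.0.0.5"]
print(len(suspicious))  # → 2: both feeds correlate on the same field name
```

The value here is not the mapping code itself but that downstream analytics and AI tooling only ever see one shape.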
Recent posts

AI Didn’t Break Your DevOps Pipeline, Your Process Was Already Rotten

AI didn’t sneak into your stack and quietly sabotage a once-pristine DevOps pipeline. That story is comforting, but it’s fiction. What’s really happening is far less dramatic and a lot more uncomfortable. Automation has a way of turning small process flaws into loud, impossible-to-ignore failures. AI just does it faster and with more confidence. If your releases feel shakier, your alerts noisier, or your postmortems more surreal than useful, AI isn’t the villain. It’s the spotlight. Teams are discovering that the shortcuts, workarounds, and undocumented assumptions they’ve been living with for years don’t survive contact with systems that act at machine speed. This isn’t an argument against AI in DevOps; it’s an argument against pretending your process was healthy before you plugged it in. AI Amplifies Weak Signals You’ve Been Ignoring DevOps pipelines rarely collapse out of nowhere. They decay quiet...

Critical Cloud Becomes the World’s First “Powered by Datadog” Partner

Cardiff, Wales, March 24th, 2026, CyberNewswire: Critical Cloud today announced that it has become the world’s first partner to achieve the “Powered by Datadog” accreditation, recognising a managed service model built on Datadog (NASDAQ: DDOG) as its operational foundation across AWS and Azure environments. “Powered by Datadog” is a premier designation awarded to partners who have deeply embedded Datadog into their managed services and demonstrated technical and onboarding excellence. Each partner undergoes formal technical review by Datadog technical teams, validating architecture, onboarding, governance maturity, and live customer implementations. Achieving “Powered by Datadog” requires partners to hold Certified Datadog Advanced Partner status and reflects Critical Cloud’s proven ability to use Datadog as the operational backbone of its managed services, helping customers improve reliability, reduce downtime, and accelerate troubleshooting through a unified...

From AI Code to Production: The Case for FeatureOps 

According to the 2025 DORA State of DevOps report, three out of four developers now use AI coding tools daily, and that number keeps climbing. By the end of 2026, over 80% of individual developers will rely on AI assistants to write, review, and refactor code. But here’s the problem: the same research found that as AI usage increases, delivery stability tends to decrease. Code ships faster than governance can follow. When developers accept AI-generated suggestions without fully understanding subtle issues buried in the logic, the gap between writing code and comprehending its production impact widens. In other words, speed without control is a false economy. The Control Gap When AI generates code at the speed of a keystroke, traditional review cycles struggle to keep up. Pull requests pile up. Code reviews become bottlenecks. Teams feel pressure to approve changes faster, and subtle bugs slip through. The ...
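One way to picture the kind of control the excerpt argues for is a pre-merge gate that routes changes into different review lanes. This is a hypothetical sketch, not anything prescribed by the DORA report or FeatureOps; the thresholds and the "AI-assisted" flag are illustrative assumptions.

```python
# Hypothetical pre-merge gate: large or AI-assisted diffs are routed to a
# slower, human-heavy review lane instead of being auto-approved.
# The 50-line threshold and the ai_assisted flag are illustrative, not sourced.
def review_lane(changed_lines: int, ai_assisted: bool,
                has_tests: bool, max_auto: int = 50) -> str:
    if ai_assisted and not has_tests:
        return "block"           # AI-generated code without tests never auto-merges
    if changed_lines > max_auto or ai_assisted:
        return "human-review"    # big or AI-touched diffs need a reviewer
    return "auto-approve"        # small, hand-written, tested changes flow through

print(review_lane(20, False, True))   # → auto-approve
print(review_lane(400, True, True))   # → human-review
print(review_lane(30, True, False))   # → block
```

The design choice is that speed is preserved for low-risk changes while governance concentrates where the excerpt says the gap widens: AI-generated code whose production impact nobody has verified.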

Two Malicious npm Packages Aim to Steal Credentials and Other Secrets

Bad actors took over an npm maintainer account and published two malicious packages designed to steal credentials, API keys, and other secrets from the machines of victims who download them from the repository. Analysts with Sonatype’s Security Research Team wrote in a report that the two packages, sbx-mask and touch-adv, are likely more than test packages: the attackers hijacked the publisher account to exploit the trust maintainers build with developers and steal valuable information, in this case secrets that can include credentials, certificates, or API keys. Sonatype is tracking the packages as Sonatype-2026-001276 and Sonatype-2026-001275, adding that the malware campaign is still active and under investigation. The attacks haven’t yet been attributed to a threat actor. Sonatype reported the packages to npm this week. The malicious packages are only the latest examples of a rising trend of bad actors targeting open code repositori...
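A quick triage step for incidents like this is checking whether the named packages appear anywhere in a project's dependency tree. The sketch below scans parsed npm lockfile data (v2/v3 format, where installed packages are keyed by paths like "node_modules/&lt;name&gt;") for the two names from the Sonatype report; it is a minimal illustration, and a real response should rely on a proper software composition analysis tool rather than exact name matching.

```python
# The two package names come from the Sonatype report cited above.
BAD_PACKAGES = {"sbx-mask", "touch-adv"}

def find_bad_dependencies(lock_data: dict) -> set[str]:
    """Return known-bad package names found in parsed package-lock.json data."""
    found = set()
    # npm lockfile v2/v3 keys installed packages by their node_modules path.
    for pkg_path in lock_data.get("packages", {}):
        name = pkg_path.rsplit("node_modules/", 1)[-1]
        if name in BAD_PACKAGES:
            found.add(name)
    return found

# Minimal fabricated lockfile fragment for demonstration only.
sample = {"packages": {
    "": {},
    "node_modules/left-pad": {"version": "1.3.0"},
    "node_modules/sbx-mask": {"version": "0.0.1"},
}}
print(sorted(find_bad_dependencies(sample)))  # → ['sbx-mask']
```

In practice this would run against `json.load(open("package-lock.json"))` in CI, alongside, not instead of, audit tooling.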

The SaaS Observability Era is Ending: Why BYOC Is the Future of Telemetry 

For years, observability was supposed to be the great equalizer: the way every team could understand their systems, debug faster, and ship with confidence. But somewhere along the way, it became the opposite: complex, expensive, and increasingly constrained. What was meant to empower developers has become a system governed by egress costs, ingestion pricing, and sampling limits. Teams do not stop observing because they want to. They stop because they are forced to make tradeoffs to stay within budget. The good news? The pendulum is swinging back. A quiet architectural revolution is already underway, one that puts observability back inside your cloud, under your control. It’s called bring your own cloud (BYOC), and it’s redefining how telemetry is stored, processed, and paid for. The Problem: Observability Got Too Expensive and Too Centralized In the early days, sending all your telemetry to a SaaS platform felt like a superpower. Datadog, New Relic and ...
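The BYOC shape the excerpt describes can be sketched as a telemetry sink that batches events and writes them to storage the team owns, so retention, sampling, and query cost stay under the team's control rather than a vendor's ingestion pricing. This is an illustrative toy, not any vendor's actual agent; a local directory stands in for an object-storage bucket in the team's own cloud.

```python
import gzip
import json
from datetime import datetime, timezone
from pathlib import Path

class OwnedTelemetrySink:
    """Toy BYOC-style sink: batch, compress, and write to team-owned storage."""

    def __init__(self, root: Path, batch_size: int = 2):
        self.root = root
        self.batch_size = batch_size
        self.buffer: list[dict] = []

    def emit(self, event: dict) -> None:
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if not self.buffer:
            return
        # Timestamped object name; in a real setup this would be an S3/GCS key.
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f")
        path = self.root / f"telemetry-{stamp}.json.gz"
        path.write_bytes(gzip.compress(json.dumps(self.buffer).encode()))
        self.buffer.clear()

sink = OwnedTelemetrySink(Path("."), batch_size=2)
sink.emit({"span": "checkout", "ms": 41})
sink.emit({"span": "checkout", "ms": 380})  # second event triggers a flush
```

Because the compressed batches land in the team's own storage, sampling and retention become a local policy decision instead of a bill-driven one.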

Secure Code Warrior AI Agent Applies Policies to AI Generated Code

Secure Code Warrior (SCW) this week added an artificial intelligence (AI) agent that both identifies code generated by an AI coding tool and automatically applies the appropriate governance policies. Company CEO Pieter Danhieux said the SCW Trust Agent makes it possible for DevSecOps teams to verify which AI models influenced specific commits, correlate that influence with vulnerability exposure, and take corrective action before insecure code reaches a production environment. DevSecOps teams can also use the AI agent to discover any Model Context Protocol (MCP) servers that might have been deployed without permission. Finally, SCW benchmark data can be used to evaluate models and enforce approved AI usage policies based on measurable output, noted Danhieux. For example, a developer may adopt one AI model to reduce costs without realizing it generates more vulnerabilities than a different model would. A...
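The core idea of model-aware commit governance can be sketched as a policy table keyed by the model a commit is attributed to. Everything below is a hypothetical illustration; the model names, thresholds, and provenance fields are invented and are not the SCW Trust Agent's actual API or policy format.

```python
# Hypothetical per-model policy: which models are approved, and how many
# scanner findings a commit from each model may carry before it is rejected.
MODEL_POLICY = {
    "model-a": {"approved": True, "max_findings": 0},
    "model-b": {"approved": True, "max_findings": 2},
}

def gate_commit(commit: dict) -> str:
    """Accept or reject a commit based on its attributed model and scan results."""
    policy = MODEL_POLICY.get(commit["model"])
    if policy is None or not policy["approved"]:
        return "reject: unapproved model"
    if commit["scan_findings"] > policy["max_findings"]:
        return "reject: too many findings for this model"
    return "accept"

print(gate_commit({"model": "model-a", "scan_findings": 0}))  # → accept
print(gate_commit({"model": "model-x", "scan_findings": 0}))  # → reject: unapproved model
```

The sketch mirrors the article's example: a cheaper model may sit in the table with a tighter findings budget, so cost-driven model choices still surface as measurable, enforceable risk.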