Category: Information Security

Scattered Spider Attacks: Tips for SaaS Security


As cloud adoption soars, threat groups like LUCR-3 (also tracked as Scattered Spider and 0ktapus) are mastering new ways to exploit identity and access management (IAM) systems, making these attacks more frequent and harder to detect. By targeting cloud environments and leveraging human vulnerabilities, LUCR-3 compromises identity providers (IDPs) and uses sophisticated techniques to breach organizations.

Before we begin, here is a sampling of the successful attacks carried out by LUCR-3, aka Scattered Spider.

| Company/Product | Date Attacked | Compromised System | Projected Loss | Mitigation Time | Source |
| --- | --- | --- | --- | --- | --- |
| Telecom Company (Unnamed) | December 2022 | Mobile Carrier Network, IDP Systems | Estimated millions in damages | Several weeks (ongoing) | CrowdStrike |
| Okta (0ktapus campaign) | March 2022 | Identity Provider (Okta) and SaaS | Potential damage to ~366 companies | 4-5 weeks | Hero, Wikipedia |
| British Telecommunications | June 2022 | Mobile Carrier Systems, BPO Networks | Millions in lost revenue | 3-4 weeks | CrowdStrike, Hero |
| Gaming Company (Unnamed) | September 2022 | Cloud Infrastructure (SaaS and IaaS) | Losses in IP theft (unconfirmed) | ~2 weeks | ISPM ITDR |
| Cloud Hosting Provider | November 2022 | AWS, Azure Environments, IAM Systems | IP theft and reputational damage | 3 weeks | CrowdStrike |
| MGM Resorts | September 2023 | Corporate systems, Help Desk, and IDP | Millions in lost revenue | Systems offline for weeks | Wikipedia |
| Caesars Entertainment | September 2023 | Identity Providers (IDP) and SaaS | ~$30 million ransom paid | ~1 month recovery | Wikipedia, Cyber Defense Magazine |
| Charter Communications | April 2024 | Cloud-based systems (Okta phishing) | Potentially millions in damages | Several weeks | Resilience |
| NHS Hospitals (UK) | June 2024 | VMware ESXi servers, critical healthcare systems | Disruption of hundreds of operations | Ongoing | BleepingComputer |
| Synnovis Pathology Services | June 2024 | Ransomware on pathology services systems | Estimated millions in healthcare disruptions | Ongoing investigation | BleepingComputer |
This table provides a detailed overview of Scattered Spider’s recent attacks across industries, demonstrating their evolving tactics and widespread impact.

This article outlines the technical steps LUCR-3 typically follows, from initial access to persistence and lateral movement within cloud environments, mostly targeting SaaS platforms.

Step 1: Initial Access Through Identity Compromise

LUCR-3 starts with a core weakness in modern security—identity management. Their main attack vectors include:

  1. SIM Swapping: LUCR-3 hijacks a user’s phone number by tricking the telecom provider into assigning the number to a new SIM card. Once they have control over the phone number, they can intercept One-Time Passwords (OTP) sent via SMS.
  2. MFA Fatigue: The attackers flood the target with repeated MFA prompts, often overwhelming them into approving a malicious login request.
  3. Phishing and Social Engineering: They set up fake login pages for SaaS applications (e.g., SharePoint or OneDrive), capturing legitimate credentials and OTP codes.

These techniques allow LUCR-3 to bypass standard Multi-Factor Authentication (MFA) protections and gain access to cloud environments (ISPM ITDR, Hero).
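None of these steps requires malware, so early detection usually has to come from the identity provider's own logs. As a rough illustration of the kind of signal worth alerting on for MFA fatigue, here is a minimal, vendor-agnostic Python sketch that flags users who receive an unusually dense burst of push prompts; the event format, field names, and thresholds are illustrative assumptions rather than any specific IdP's schema.

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical, vendor-agnostic MFA events (field names are assumptions).
events = [
    {"user": "alice", "type": "mfa_push_sent", "time": "2024-06-01T09:00:05"},
    {"user": "alice", "type": "mfa_push_sent", "time": "2024-06-01T09:00:40"},
    {"user": "alice", "type": "mfa_push_sent", "time": "2024-06-01T09:01:10"},
    {"user": "alice", "type": "mfa_push_sent", "time": "2024-06-01T09:02:02"},
    {"user": "bob",   "type": "mfa_push_sent", "time": "2024-06-01T09:03:00"},
]

WINDOW = timedelta(minutes=10)
THRESHOLD = 3  # more than 3 prompts inside the window looks like prompt bombing

def mfa_fatigue_candidates(events):
    """Return users who received an unusually dense burst of MFA push prompts."""
    per_user = defaultdict(list)
    for e in events:
        if e["type"] == "mfa_push_sent":
            per_user[e["user"]].append(datetime.fromisoformat(e["time"]))
    flagged = set()
    for user, times in per_user.items():
        times.sort()
        for i, start in enumerate(times):
            in_window = [t for t in times[i:] if t - start <= WINDOW]
            if len(in_window) > THRESHOLD:
                flagged.add(user)
                break
    return flagged

print(mfa_fatigue_candidates(events))  # {'alice'}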

Step 2: Bypassing MFA and Establishing a Foothold

Once inside, LUCR-3 focuses on maintaining access to the compromised identity. This is done by modifying the victim’s MFA settings. Their tactics include:

  • Registering New Devices: LUCR-3 will register their own devices (phones or emails) under the victim’s account, which ensures they can log in without triggering alerts. For example, they might register an iPhone if the victim previously used Android, raising minimal suspicion.
  • Adding Alternate MFA Methods: They add backup MFA methods, such as an external email address, making it even harder to lock them out if the breach is discovered (ISPM ITDR). A monitoring sketch for new factor enrollments follows below.
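Because this stage hinges on quiet changes to MFA settings, a useful control is to diff factor enrollments against a per-user baseline. The Python sketch below illustrates that idea; the event names and fields are hypothetical placeholders, not any particular IdP's audit schema, so treat it as a starting point rather than a drop-in detection.

# Baseline of factors each user is known to have enrolled (assumed data).
known_factors = {
    "alice": {("push", "iPhone 13")},
    "bob":   {("totp", "Android")},
}

# Hypothetical audit events describing new factor enrollments.
audit_events = [
    {"user": "alice", "action": "mfa_factor_enrolled",
     "factor": ("push", "iPhone 15"), "source_ip": "203.0.113.7"},
    {"user": "bob", "action": "mfa_factor_enrolled",
     "factor": ("email", "external@mailbox.example"), "source_ip": "198.51.100.9"},
]

def new_factor_alerts(events, baseline):
    """Flag factor enrollments that are not in the user's known baseline."""
    alerts = []
    for e in events:
        if e["action"] != "mfa_factor_enrolled":
            continue
        if e["factor"] not in baseline.get(e["user"], set()):
            alerts.append(f"ALERT: {e['user']} enrolled new factor {e['factor']} "
                          f"from {e['source_ip']}")
    return alerts

for line in new_factor_alerts(audit_events, known_factors):
    print(line)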

Step 3: Reconnaissance and Data Collection in SaaS Environments

After gaining access to cloud platforms, LUCR-3 conducts extensive reconnaissance to identify critical assets, credentials, and sensitive information. Here’s how they do it:

  1. SaaS Platforms: They use native tools within platforms like SharePoint, OneDrive, and Salesforce to search for documents containing passwords, intellectual property, or financial data. They operate like legitimate users to avoid detection.
  2. AWS Cloud: In AWS environments, LUCR-3 navigates the AWS Management Console, targeting services like EC2 (Elastic Compute Cloud) and S3 (Simple Storage Service). They leverage the AWS-GatherSoftwareInventory job through Systems Manager (SSM) to list running software across EC2 instances (ISPM ITDR).
  3. Privilege Escalation: LUCR-3 may modify IAM roles or escalate privileges by updating LoginProfiles or creating new access keys, ensuring they have continued administrative access (Hero). A CloudTrail sweep sketch for these event names follows below.
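In AWS, most of the activity described above (console use, SSM inventory jobs, LoginProfile changes, new access keys) is recorded in CloudTrail, so a periodic sweep for a handful of sensitive event names makes a cheap early-warning check. The boto3 sketch below shows one way to run that sweep; the event-name list and 24-hour window are illustrative choices, not a complete detection rule.

from datetime import datetime, timedelta, timezone

import boto3

# Event names often associated with identity-focused privilege escalation.
# (Illustrative subset; tune for your environment.)
SENSITIVE_EVENTS = [
    "CreateAccessKey",
    "UpdateLoginProfile",
    "CreateLoginProfile",
    "AttachUserPolicy",
    "CreateUser",
]

def recent_sensitive_events(hours=24):
    """Return recent CloudTrail events matching the sensitive event names."""
    ct = boto3.client("cloudtrail")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    findings = []
    for name in SENSITIVE_EVENTS:
        # lookup_events accepts one lookup attribute per call.
        resp = ct.lookup_events(
            LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": name}],
            StartTime=start,
            EndTime=end,
        )
        findings.extend(resp.get("Events", []))
    return findings

if __name__ == "__main__":
    for ev in recent_sensitive_events():
        print(ev["EventTime"], ev["EventName"], ev.get("Username", "unknown"))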

Step 4: Lateral Movement and Persistence

LUCR-3 ensures they have multiple ways to re-enter a compromised environment, even if one of their entry points is discovered. Here’s how they achieve persistence:

  1. Create New IAM Users: LUCR-3 creates new user accounts that align with the naming conventions of the compromised environment to avoid suspicion. These accounts often have high-level access, allowing them to continue accessing the environment even after the initial breach is patched (a simple audit sketch for this follows after the list).
  2. Secrets Harvesting: Using tools like S3 Browser, LUCR-3 harvests credentials stored in AWS Secrets Manager and similar services, allowing them to steal sensitive data and penetrate systems further (Hero).
  3. MFA Manipulation: They alter MFA settings to ensure continued access, often registering additional email addresses or devices that align with the compromised identity.
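Because persistence often looks like a plausibly named IAM user created after the initial foothold, a periodic audit of recently created users and their attached policies can surface it. The boto3 sketch below is a minimal version of that audit; the seven-day window and the "Admin" substring check are arbitrary examples, not a recommended policy.

from datetime import datetime, timedelta, timezone

import boto3

def recently_created_users(days=7):
    """List IAM users created in the last `days` days with their attached policies."""
    iam = boto3.client("iam")
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    recent = []
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            if user["CreateDate"] < cutoff:
                continue
            attached = iam.list_attached_user_policies(UserName=user["UserName"])
            policy_names = [p["PolicyName"] for p in attached["AttachedPolicies"]]
            recent.append((user["UserName"], user["CreateDate"], policy_names))
    return recent

if __name__ == "__main__":
    for name, created, policies in recently_created_users():
        marker = "!!" if any("Admin" in p for p in policies) else "  "
        print(f"{marker} {name} created {created:%Y-%m-%d} policies={policies}")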

Step 5: Data Exfiltration and Extortion

Once LUCR-3 has gained the necessary access and gathered sensitive data, they execute the final stage of the attack, which often involves extortion. The data collected during reconnaissance, such as customer information or proprietary code, is used as leverage to demand payment from the compromised organization (The Hacker News, ISPM ITDR).

How to Detect and Prevent LUCR-3 Attacks

Given LUCR-3’s sophisticated techniques, organizations must adopt advanced security measures to detect and mitigate such attacks:

  • Monitor MFA Changes: Keep a close watch for unusual changes in MFA settings, such as new device registrations or changes from app-based authentication to SMS-based methods.
  • Audit Cloud Logs: Regularly audit cloud environments, especially IAM policy changes, new access key creation, and suspicious activity in management consoles.
  • Behavioral Anomaly Detection: Implement advanced behavioral monitoring to detect when legitimate accounts are being used in unusual ways, such as accessing unfamiliar services or using unfamiliar devices (a minimal baseline sketch follows below).
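Building on that last bullet, even a crude per-user baseline catches a lot: an account that has only ever signed in from one country on one device type suddenly showing up somewhere new deserves a ticket. The Python sketch below illustrates the idea with made-up sign-in records; a real deployment would build the baseline from historical log data.

# Historical sign-in baseline per user (assumed, illustrative data).
baseline = {
    "alice": {"countries": {"GB"}, "devices": {"Windows"}},
    "bob":   {"countries": {"IN"}, "devices": {"macOS"}},
}

new_signins = [
    {"user": "alice", "country": "GB", "device": "Windows"},
    {"user": "alice", "country": "US", "device": "iOS"},      # new country and device
    {"user": "bob",   "country": "IN", "device": "macOS"},
]

def anomalous_signins(signins, baseline):
    """Return sign-ins whose country or device is outside the user's baseline."""
    anomalies = []
    for s in signins:
        profile = baseline.get(s["user"], {"countries": set(), "devices": set()})
        if s["country"] not in profile["countries"] or s["device"] not in profile["devices"]:
            anomalies.append(s)
    return anomalies

print(anomalous_signins(new_signins, baseline))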

Conclusion

LUCR-3 (Scattered Spider) represents a new breed of cyber threat actors that rely on identity compromise rather than malware or brute force. By targeting the very foundation of security—identity—they can infiltrate cloud environments, move laterally, and exfiltrate data with relative ease. As organizations increasingly rely on cloud services, strengthening identity management, closely monitoring for anomalies, and responding quickly to suspicious behavior are critical defenses against such attacks.

References and Further Reading

  1. The Hacker News: Provides a detailed breakdown of LUCR-3’s identity-based attacks across cloud environments, lateral movement techniques, and persistence strategies.
  2. Permiso.io: Discusses how LUCR-3 targets identity infrastructure, modifies MFA settings, and maintains persistence in cloud environments like AWS and Azure.
  3. CrowdStrike: Offers insights into Scattered Spider’s use of the Bring-Your-Own-Vulnerable-Driver (BYOVD) technique and their focus on telecom and BPO sectors.
  4. Resilience Cyber Research: Highlights recent phishing campaigns by LUCR-3 in 2024, targeting industries such as telecom, food services, and tech, using Okta-based phishing tactics.
  5. EclecticIQ: Discusses LUCR-3’s involvement in ransomware attacks targeting cloud infrastructures within the insurance and financial sectors, leveraging smishing and phishing techniques.
  6. Wikipedia (Scattered Spider): Overview of the MGM Resorts hack in 2023, detailing how Scattered Spider gained access to internal systems through social engineering and caused significant disruptions.
  7. Cyber Defense Magazine: Discusses how LUCR-3 has highlighted vulnerabilities in MFA and cloud security, predicting more targeted attacks on SaaS and cloud service providers.
  8. BleepingComputer: Provides an overview of LUCR-3’s collaboration with ransomware groups like Qilin, targeting high-profile companies such as MGM Resorts and healthcare services.
  9. Caesars and MGM Hacking Incident: Outlines how Caesars Entertainment suffered a breach in September 2023, paying a ~$30 million ransom, while MGM Resorts experienced extensive downtime following a similar attack.
  10. Microsoft and Qilin Ransomware: Microsoft linked Scattered Spider to ransomware attacks using the Qilin variant, affecting companies like Synnovis Pathology and NHS hospitals in 2024 (BleepingComputer, Wikipedia).

These resources offer in-depth insights into the attack strategies and defence mechanisms relevant to LUCR-3 (Scattered Spider), perfect for anyone looking to deepen their understanding of identity-based attacks and cloud security.

How Will China’s Quantum Advances Change Internet Security?


Image generated with DALL·E 3

Introduction:

Chinese scientists have recently announced that they have successfully cracked military-grade encryption using a quantum computer with 372 qubits, a significant achievement that underscores the rapid evolution of quantum technology. This breakthrough has sparked concerns across global cybersecurity communities as RSA-2048 encryption—a widely regarded standard—was reportedly compromised. However, while this development signifies an important leap forward in quantum capabilities, its immediate implications are nuanced, particularly for everyday encryption protocols.

Drawing on technical insights from recent papers and analyses, this article delves deeper into the technological aspects of the breakthrough and explores why, despite this milestone, quantum computing still has limitations that prevent it from immediately threatening personal and business-level encryption.

The Quantum Breakthrough: Factoring RSA-2048

As reported by The Quantum Insider and South China Morning Post, the Chinese research team employed a 372-qubit quantum computer to crack RSA-2048 encryption, a cryptographic standard widely used to protect sensitive military information. RSA encryption relies on the difficulty of factoring large numbers, a task that classical computers would take thousands of years to solve. However, using quantum algorithms—specifically an enhanced version of Shor’s algorithm—the team demonstrated that quantum computers could break RSA-2048 in a much shorter time frame.

The breakthrough optimised Shor’s algorithm to function efficiently within the constraints of a 372-qubit machine. This marks a critical turning point in quantum computing, as it demonstrates the potential for quantum systems to tackle problems previously considered infeasible for classical systems. However, the paper from the Chinese Journal of Computers (2024) offers deeper insights into the quantum architecture and algorithmic refinements that made this breakthrough possible, highlighting both the computational power and limitations of the system.

Quantum Hardware and Algorithmic Optimisation

The technical aspects of the Chinese breakthrough, as detailed in the 2024 paper published in the Chinese Journal of Computers (CJC), emphasise the improvements in quantum hardware and algorithmic approaches that were key to this success. The paper outlines how the researchers enhanced Shor’s algorithm to mitigate the high error rates commonly associated with quantum computing, allowing for more stable computations over longer periods. This required optimising quantum gate operations, reducing quantum noise, and employing error-correction codes to preserve the integrity of qubit states.

Despite these improvements, the paper makes it clear that current quantum computers, including the 372-qubit machine used in this experiment, still suffer from several limitations. The system required an extremely controlled environment to maintain qubit coherence, and any deviation from ideal conditions would have introduced significant errors. Furthermore, the researchers faced challenges related to the scalability of the system, as error rates increase exponentially with the number of qubits involved. These limitations are consistent with the broader consensus in the field, as noted by Bill Buchanan and other experts, that practical quantum decryption on a global scale is not yet feasible.

The CJC paper also points out that while the breakthrough is impressive, it does not represent a complete realisation of quantum supremacy—the point at which quantum computers outperform classical computers across a wide range of tasks. The paper discusses the need for further advancements in quantum gate fidelity, qubit interconnectivity, and error correction to make quantum decryption scalable and applicable to broader, real-world encryption protocols.

Technical Analysis based on Li et al. (2024):

The paper explores two approaches for attacking RSA public key cryptography using quantum annealing:

1. Quantum Annealing for Combinatorial Optimization:

  • Method: This approach translates the mathematical attack method into a combinatorial optimization problem suited to the Ising model or QUBO model [1] (a simplified sketch of this mapping is given after this list). The Ising model represents a system of interacting spins, which can be mapped to the problem of factoring the large integers used in RSA encryption.
  • Key Contribution: The paper proposes a high-level optimization model for multiplication tables and establishes a new dimensionality-reduction formula. This formula reduces the number of qubits needed, thus saving resources and improving the stability of the Ising model [1]. The authors demonstrate this by successfully decomposing an integer on the order of two million using a D-Wave Advantage system.
  • Comparison: This approach outperforms previous methods by universities and corporations like Purdue, Lockheed Martin, and Fujitsu [1]. This is achieved by significantly reducing the range of coefficients required in the Ising model, leading to a higher success rate in decomposition.
  • Focus: This technique represents a class of attack algorithms specifically designed for D-Wave quantum computers, known for their use of quantum annealing [1].

2. Quantum Annealing with Classical Methods:

  • Method: This approach combines the quantum annealing algorithm with established mathematical methods for cryptographic attacks, aiming to optimize attacks on specific cryptographic components [1]. It integrates the classical lattice reduction algorithm with the Schnorr algorithm.
  • Key Contribution: The authors leverage the quantum tunneling effect to adjust the rounding direction within the Babai algorithm, allowing for precise vector determination, a crucial step in the attack [1]. Quantum computing’s exponential acceleration capabilities address the challenge of calculating numerous rounded directions, essential for solving lattice problems [1]. Additionally, the paper proposes methods to improve search efficiency for close vectors, considering both qubit resources and time costs [1]. Notably, it demonstrates the first 50-bit integer decomposition on a D-Wave Advantage system, showcasing the algorithm’s versatility [1].
  • Comparison: The paper argues that D-Wave quantum annealing offers a more practical approach for smaller-scale attacks compared to Variational Quantum Algorithms (VQAs) on NISQ (Noisy Intermediate-Scale Quantum) computers. VQAs suffer from the “barren plateaus” problem, which can hinder algorithm convergence and limit effectiveness [1]. Quantum annealing is less susceptible to this limitation and offers an advantage when dealing with smaller-scale attacks.
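To make the Ising/QUBO mapping in the first approach a little more concrete, here is the textbook formulation of factoring as an optimisation problem, written in LaTeX. This is a simplified sketch; the paper's actual model adds the multiplication-table and dimensionality-reduction refinements described above.

% Write the unknown odd factors of N in binary:
p = 1 + \sum_{i=1}^{k} 2^{i} x_i, \qquad q = 1 + \sum_{j=1}^{l} 2^{j} y_j, \qquad x_i, y_j \in \{0, 1\}
% and minimise the squared residual, which is zero exactly when pq = N:
E(x, y) = \bigl( N - p(x)\, q(y) \bigr)^2
% Reducing the higher-order terms to quadratic (QUBO) form and substituting
% x = (1 + s)/2 with spins s \in \{-1, +1\} yields an Ising Hamiltonian
H(s) = \sum_i h_i s_i + \sum_{i<j} J_{ij} s_i s_j + \mathrm{const},
% whose ground state encodes the factors p and q.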

Citations:

  1. Li, Gao, et al. “A Novel Quantum Annealing Attack on RSA Public Key Cryptosystems.” WC 2024 (2024).

Implications for Civilian Encryption: Limited Immediate Impact

While the Chinese breakthrough is undeniably significant, it is essential to recognise that the decryption of military-grade encryption does not immediately translate to vulnerabilities in civilian encryption protocols. Most personal and business communications rely on RSA-1024, elliptic-curve cryptography (ECC), or other lower-bit encryption systems. These systems remain secure against the capabilities of today’s quantum computers.

Moreover, as highlighted in the paper by Buchanan and echoed in the CJC analysis, many organisations are already transitioning towards post-quantum cryptography (PQC). PQC algorithms are specifically designed to withstand quantum attacks, ensuring that even as quantum computers advance, encryption systems will evolve to meet new threats.

Another key point raised by the CJC paper is that quantum decryption requires an immense amount of resources and computational power. The system used to break RSA-2048 involved highly specialised hardware and extensive computational time. Scaling such an operation to break everyday encryption protocols, such as those used in internet banking or personal communications, would require quantum computers with far more qubits and error-correction capabilities than are currently available.

Preparing for a Quantum Future: Post-Quantum Cryptography

As quantum computing technology evolves, it is imperative that governments, companies, and cybersecurity professionals continue preparing for the eventual reality of quantum decryption. This preparation includes developing and implementing post-quantum cryptographic solutions that are immune to quantum attacks. The National Institute of Standards and Technology (NIST) has already initiated efforts to standardise post-quantum cryptographic algorithms, which are designed to be secure against both classical and quantum attacks. The CJC paper underlines the importance of this transition and suggests that PQC will likely become the new standard in encryption over the next decade.

In addition to PQC, the CJC paper highlights the need for ongoing research into hybrid encryption systems, which combine classical cryptographic techniques with quantum-resistant methods. These hybrid systems could provide a transitional solution, allowing existing infrastructure to remain secure while fully quantum-resistant algorithms are developed and implemented.

Conclusion: A Scientific Milestone with Limited Immediate Consequences

The Chinese research team’s quantum decryption of military-grade encryption is a groundbreaking scientific achievement, signalling that quantum computing is rapidly advancing towards practical applications. However, as emphasised in the technical analyses from the Chinese Journal of Computers and other sources, this breakthrough is not yet a direct threat to civilian encryption systems. Current quantum computers remain limited by their error rates, scalability challenges, and the need for controlled environments, preventing widespread decryption capabilities.

As organisations and governments prepare for a post-quantum future, the adoption of post-quantum cryptography and hybrid systems will be crucial in ensuring that encryption protocols remain robust against both classical and quantum threats. While the breakthrough highlights the potential power of quantum computing, its impact on everyday encryption is still years, if not decades, away.

References and Further Reading

  1. Bill Buchanan, “A Major Advancement on Quantum Cracking,” Medium, 2024.
  2. The Quantum Insider, “Chinese Scientists Report Using Quantum Computer to Hack Military-Grade Encryption,” October 11, 2024.
  3. South China Morning Post, “Chinese Scientists Hack Military-Grade Encryption Using Quantum Computer,” October 2024.
  4. Interesting Engineering, “China’s Scientists Successfully Hack Military-Grade Encryption with Quantum Computer,” October 2024.
  5. Shor, P.W., “Algorithms for Quantum Computation: Discrete Logarithms and Factoring,” Proceedings of the 35th Annual Symposium on Foundations of Computer Science, 1994.
  6. National Institute of Standards and Technology (NIST), “Post-Quantum Cryptography: Current Status,” 2024.
  7. Chinese Journal of Computers, “Quantum Algorithmic Enhancements in Breaking RSA-2048 Encryption,” 2024.

Why Did Elastic Decide to Go Open Source Again?

Elastic’s Return to Open Source: The Knight is back to the Pavilion

Elastic, the company behind Elasticsearch, recently decided to revert to an open-source licensing model after roughly four years of operating under a proprietary license. This decision reflects a shift in strategy that emphasizes community-driven innovation and collaboration. In 2021, Elastic initially adopted a proprietary model to protect its intellectual property from cloud providers like Amazon Web Services (AWS), which were benefiting from Elasticsearch without contributing to its development. However, the move away from open source posed its own challenges, including alienating the developer community that had helped build Elasticsearch into a widely used tool.

In 2024, Elastic CEO Shay Banon announced the company’s return to an open-source framework. He explained that this decision stems from the belief that open collaboration fosters innovation and better serves the long-term interests of both the company and its user base. “We believe that the best products are built together,” Banon stated, emphasizing the value of community engagement in product development.

Recent Changes in Open-Source Licensing Models

Elastic’s decision is not an isolated incident. Over the past few years, several other technology companies have reconsidered their licensing models in response to the changing dynamics of software development and cloud service providers. These companies have struggled with how to balance open-source principles with the need to protect their commercial interests.

  1. Redis Labs
    Redis Labs initially licensed Redis under a permissive open-source license, but in 2018, the company adopted the Commons Clause to prevent cloud providers from offering Redis as a service without contributing to its development. However, after facing backlash from the developer community, Redis Labs adjusted its approach by introducing Redis Stack under more community-friendly terms, highlighting the difficulty of maintaining open-source integrity while ensuring business protection.
  2. HashiCorp
    In 2023, HashiCorp, known for popular tools like Terraform, adopted a Business Source License (BSL), which restricts the usage of its software in certain commercial contexts. HashiCorp’s move was driven by concerns over cloud providers monetizing its tools without contributing back to the open-source community. While BSL is not a traditional open-source license, HashiCorp continues to maintain a balance between openness and protecting its intellectual property, showing how companies are navigating complex market dynamics.
  3. MongoDB
    MongoDB’s shift to the Server Side Public License (SSPL) in 2018 was another major development in the open-source licensing debate. The SSPL aims to prevent cloud service providers from exploiting MongoDB’s open-source code without contributing back. While the SSPL is more restrictive than traditional open-source licenses, MongoDB’s goal was to retain the open-source ethos while ensuring that cloud vendors could not commercialize the software without contributing to its development.
  4. Chef Software
    Chef, an automation tool provider, switched all of its products to open-source in 2019 after years of operating under a mixed licensing model. This shift was largely a response to the growing demand for transparency and community collaboration. Chef’s decision allowed it to rebuild trust within its user base and align its business strategy with the broader trends in software development.

Impact on the Average Software Developer

For the average software developer, these licensing model changes can profoundly impact their work, career growth, and day-to-day development practices.

  1. Access to Cutting-Edge Tools
    When companies like Elastic and MongoDB return to open-source models, developers gain unrestricted access to powerful tools and frameworks. This democratizes the technology, allowing developers from small companies, startups, and even personal projects to leverage the same tools that major enterprises use, without the barrier of expensive proprietary licenses. For many developers, open-source provides not just tools, but an entire ecosystem for experimentation, learning, and rapid prototyping.
  2. Contributing to Open-Source Communities
    Open-source contributions are an essential career-building tool for many developers. By contributing to open-source projects, developers can gain real-world experience, build portfolios, and even influence the direction of widely-used technologies. When companies like HashiCorp and Redis Labs shift their focus back to open-source, it increases opportunities for developers to become part of a larger, global development community.
  3. Career and Learning Opportunities
    Exposure to open-source projects allows developers to work with cutting-edge technology and methodologies. This can accelerate learning, as open-source projects are often evolving quickly with input from diverse and global teams. Additionally, contributing to popular open-source projects like Elastic or Kubernetes can greatly enhance a developer’s resume and open doors to career opportunities, including job offers and consulting roles.
  4. Navigating Licensing Restrictions
    Developers must also become more adept at navigating the complexities of new licenses like SSPL and BSL. These licenses place restrictions on how open-source software can be used, especially in cloud environments. Understanding the fine print is crucial for developers working in enterprise environments or launching their own SaaS products, as improper use of open-source software can lead to legal complications. This makes legal and compliance knowledge increasingly important in modern software development roles.

Open Source vs. Open Governance: A Crucial Distinction

Elastic’s journey highlights a key debate in the software development world: the difference between open source and open governance. While many companies have embraced open-source models, few have transitioned to open governance frameworks, which involve community-driven decision-making for the project’s future direction.

As highlighted in my previous article, “Open Source vs. Open Governance: The State and Future of the Movement,” the distinction lies in control. In open-source projects, the code is freely available, but decisions regarding the project’s roadmap and key developments may still be controlled by a single entity, such as a company. In contrast, open governance ensures that decision-making is decentralized, often involving multiple stakeholders, including developers, users, and companies that contribute to the project.

For Elastic and others, returning to open-source doesn’t necessarily mean embracing open governance. Although Elastic’s code will be open for contributions, the strategic direction will still be managed by the company. This is a common approach in many high-profile open-source projects. For example, Google’s Kubernetes operates under the open-source model but is governed by a diverse group of stakeholders, ensuring the project’s direction isn’t controlled by a single entity. On the other hand, projects like OpenStack follow a more open governance approach, with broader community involvement in decision-making.

Understanding the difference between open-source and open governance is critical as the software industry evolves. Companies are beginning to realize that open-source alone doesn’t always translate into the collaborative, community-driven development they seek. Open governance provides a framework for more inclusive decision-making, but it also presents challenges in terms of efficiency and control.

Looking Ahead: Open Source as a Business Strategy

The return of Elastic and other companies to more open models indicates a growing recognition of the importance of open-source in the software industry. For Elastic, this decision is about more than just licensing; it’s about reconnecting with a developer community that thrives on transparency and collaboration. By embracing open-source again, Elastic hopes to accelerate product development and foster stronger relationships with users.

This broader trend shows that while companies are still cautious about cloud providers exploiting their software, they are increasingly finding ways to leverage open-source models as a business strategy. These recent changes to licensing frameworks highlight the evolving nature of software development and the role open-source plays in it.

For organizations navigating the complex decision between proprietary and open-source models, the key lesson from Elastic’s experience is that the long-term benefits of community-driven development and innovation can outweigh the short-term protection of proprietary models. As more companies follow suit, it’s clear that open-source is not just a technical choice—it’s a business strategy.

Further Reading:

  1. Why Open Source Matters for Innovation – Alan Turing Institute
  2. The Future of Open Source: What to Expect in 2024 and Beyond – MIT Technology Review
  3. Why Every Company Should be Open-Source Aligned – Forbes



WazirX Security Breach, What You Need to Know


Major Security Breach at WazirX: Key Details and How to Protect Yourself

In a shocking turn of events, WazirX, one of India’s premier cryptocurrency exchanges, has fallen victim to a massive security breach. The incident has not only raised alarm bells in the crypto community but also highlighted the pressing need for stringent security measures. Here’s a comprehensive look at the breach, its implications, and how you can safeguard your digital assets.

The WazirX Security Breach: What Happened?

In July 2024, WazirX confirmed a major security breach that resulted in hackers siphoning off approximately $10 million worth of various cryptocurrencies from user accounts. According to The Hacker News, the attackers exploited vulnerabilities in the exchange’s infrastructure, gaining unauthorized access to user data and funds. This incident is part of a broader trend of increasing cyberattacks on cryptocurrency platforms.

Additionally, Business Standard reported a suspicious transfer of $230 million just before the breach was discovered, raising further concerns about the internal security measures and the potential for insider involvement.

How Did the Hack Happen?

According to the preliminary report by WazirX, the breach involved a complex and coordinated attack on their multi-signature wallet infrastructure:

  1. Tampering with Transaction Ledger: The attackers managed to manipulate the transaction ledger, enabling unauthorized transactions. This tampering allowed fraudulent withdrawals that initially went unnoticed.
  2. Manipulating the User Interface (UI): The hackers exploited vulnerabilities in the user interface to conceal their activities. This manipulation misled both users and administrators by displaying incorrect balances and transaction histories.
  3. Collaboration with Liminal: WazirX worked closely with cybersecurity firm Liminal to investigate the breach. Liminal’s expertise was crucial in identifying the vulnerabilities and understanding the full scope of the attack.

The preliminary investigation indicated that there were no signs of a phishing attack or insider involvement. Instead, the breach was due to external manipulation of the transaction system and user interface.

Immediate Actions Taken by WazirX

Upon detecting the breach, WazirX swiftly implemented several measures to mitigate the damage:

  1. Containment: Affected systems were isolated to prevent further unauthorized access.
  2. User Notification: Users were promptly informed about the breach with advisories to change passwords and enable two-factor authentication (2FA).
  3. Investigation: WazirX is collaborating with top cybersecurity firms and law enforcement to investigate the breach and identify the culprits.
  4. Security Enhancements: Additional security measures, including enhanced encryption and stricter access controls, have been put in place.

According to Livemint, WazirX is working closely with global law enforcement agencies to recover the stolen assets and bring the perpetrators to justice. This breach follows a series of high-profile crypto scams and exchange failures, including the collapses of FTX and QuadrigaCX, which have collectively led to billions in losses for investors worldwide.

Implications for WazirX Users

The WazirX security breach has several critical implications:

  • Personal Data Exposure: Users’ personal information, including names, email addresses, and phone numbers, may be at risk.
  • Financial Loss: The breach has led to significant financial losses, although efforts are underway to recover the stolen funds.
  • Trust Issues: Such incidents can severely undermine user trust in cryptocurrency exchanges, emphasizing the need for robust security practices.

How to Protect Your Cryptocurrency Assets

In light of the WazirX security breach, here are some essential steps to protect your digital assets:

  1. Change Your Passwords: Update your WazirX password immediately and avoid using the same password across multiple platforms.
  2. Enable Two-Factor Authentication (2FA): Adding an extra layer of security can significantly reduce the risk of unauthorized access.
  3. Monitor Your Accounts: Regularly check your transaction history for any unusual activity and report suspicious transactions immediately.
  4. Beware of Phishing Attacks: Be cautious of emails or messages requesting personal information. Verify the source before responding.
  5. Use Hardware Wallets: For significant cryptocurrency holdings, consider using hardware wallets, which offer enhanced security against online threats.

The Future of Cryptocurrency Security

The WazirX breach is a wake-up call for the entire cryptocurrency industry. It underscores the necessity for continuous security upgrades and vigilant monitoring to protect users’ assets and maintain trust. As the industry evolves, exchanges must prioritize security to safeguard their platforms against increasingly sophisticated cyber threats.


Stay informed and vigilant to protect your investments in the ever-evolving world of cryptocurrencies. By taking proactive steps, you can enhance your digital security and navigate the market with confidence.

#WazirX #Cryptocurrency #SecurityBreach #CryptoHacking #BlockchainSecurity #DigitalAssets #CryptoSafety #WazirXHack #CryptocurrencySecurity

A Step-by-Step Guide to Implementing AttackGen for Improved Incident Response


In the ever-evolving landscape of cybersecurity, preparing for potential incidents is crucial. One innovative tool making waves in this domain is AttackGen. Developed by Matthew Adams, who heads Security for Generative AI at Citi, AttackGen leverages the power of large language models (LLMs) to generate customized incident response scenarios tailored to specific industries and company sizes. Whether you’re in Aerospace & Defense, FinTech, or Healthcare, AttackGen offers invaluable training scenarios to enhance your cybersecurity incident response capabilities.

What is AttackGen?

AttackGen is a cybersecurity incident response testing tool designed to help organizations prepare for potential threats. By using LLMs, it creates realistic incident response scenarios based on the chosen industry and company size. For instance, it can generate scenarios for a “Large” company with 201-1,000 employees in the Aerospace & Defense sector. These tailored scenarios are essential for training cybersecurity incident responders, providing them with practical, industry-specific exercises.

How to Get Started with AttackGen

To start using AttackGen, follow these steps:

  1. Clone the Repository
    First, you’ll need to clone the AttackGen repository from GitHub. You can find it by searching for “AttackGen” or the profile of its creator, Matt Adams.
    git clone https://github.com/mrwadams/attackgen.git
  2. Navigate to the Directory
    Change into the newly created ‘attackgen’ directory.
    cd attackgen
  3. Install Requirements
    Install the necessary Python packages to run the tool.
    pip install -r requirements.txt
  4. Download the MITRE ATT&CK Framework
    Download the latest version of the MITRE ATT&CK framework and place it in the “data” directory within the attackgen folder.
  5. Run the Application
    Start the application using Streamlit.
    streamlit run 👋_Welcome.py

Using AttackGen

Once the application is up and running, open it in your preferred web browser. You’ll be greeted with the main page where you’ll need to enter your OpenAI API key. For the record, AttackGen also supports multiple other LLM providers, including Mistral, Google AI, Ollama, and Azure OpenAI. After selecting your preferred model and entering your API key, follow these steps:

  1. Select Industry and Company Size
    Choose your company’s industry and size to tailor the incident response scenarios.
  2. Generate Scenario
    Click on “✨ Generate Scenario” to proceed.
  3. Choose Threat Actor Group
    On the next page, select a threat actor group and associated ATT&CK techniques.
  4. Download Scenario
    After generating the scenario, you can download it in Markdown format for use in your incident response training. It’s advisable to upload this scenario to your version control system promptly.

Visualizing Your Scenarios

For those interested in visualizing the Tactics, Techniques, and Procedures (TTPs) included in your scenarios, consider using the ATT&CK Navigator. This tool helps identify, highlight, and prioritize TTPs effectively. You can learn more about this in one of my previous posts on Analyzing and Visualizing Cyberattacks using Attack Flow.

Conclusion

AttackGen is a powerful tool for enhancing your incident response training by providing realistic, industry-specific scenarios. Kudos to Matt Adams for developing this innovative tool. For more insights and guides on cybersecurity, follow me as I continue to explore and share new tools and techniques every week. Your feedback is always welcome!



Feel free to reach out with any questions or suggestions. Happy hunting! 🚀

Mastering Cyber Defense: The Impact Of AI & ML On Security Strategies


The cybersecurity landscape is a relentless battlefield. Attackers are constantly innovating, churning out new threats at an alarming rate. Traditional security solutions are struggling to keep pace. But fear not, weary defenders! Artificial Intelligence (AI) and Machine Learning (ML) are emerging as powerful weapons in our arsenal, offering the potential to revolutionize cybersecurity.

The Numbers Don’t Lie: Why AI/ML Matters

  • Security Incidents on the Rise: According to the IBM Security X-Force Threat Intelligence Index 2023 https://www.ibm.com/reports/threat-intelligence, the average organization experienced 270 cyberattacks in 2022, a staggering 13% increase from the previous year.
  • Alert Fatigue is Real: Security analysts are bombarded with a constant stream of alerts, often leading to “alert fatigue” and missed critical threats. A study by the Ponemon Institute found that it takes an average of 280 days to identify and contain a security breach https://www.ponemon.org/.

AI/ML to the Rescue: Current Applications

AI and ML are already making a significant impact on cybersecurity:

  • Reverse Engineering Malware with Speed: AI can disassemble and analyze malicious code at lightning speed, uncovering its functionalities and vulnerabilities much faster than traditional methods. This allows defenders to understand attacker tactics and develop effective countermeasures before widespread damage occurs.
  • Prioritizing the Vulnerability Avalanche: Legacy vulnerability scanners often generate overwhelming lists of potential weaknesses. AI can prioritize these vulnerabilities based on exploitability and potential impact, allowing security teams to focus their efforts on the most critical issues first. A study by McAfee found that organizations can reduce the time to patch critical vulnerabilities by up to 70% using AI https://www.mcafee.com/blogs/internet-security/the-what-why-and-how-of-ai-and-threat-detection/.
  • Security SIEMs Get Smarter: Security Information and Event Management (SIEM) systems ingest vast amounts of security data. AI can analyze this data in real time, correlating events and identifying potential threats with an accuracy far exceeding human capabilities. This significantly improves threat detection accuracy and reduces the time attackers have to operate undetected within a network (a toy anomaly-scoring sketch follows below).
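To make the SIEM point slightly more concrete, here is a deliberately small Python sketch that uses scikit-learn's IsolationForest to score simple sign-in features for anomalies. It is a toy under obvious assumptions (hand-made features, tiny sample), not a production pipeline.

from sklearn.ensemble import IsolationForest

# Toy per-login features: [failed_logins_last_hour, bytes_uploaded_MB, hour_of_day]
X = [
    [0,   5,  9], [1,  8, 10], [0,  6, 11], [0,  4, 14],
    [1,   7, 15], [0,  5, 16], [0,  6, 10], [1,  9, 13],
    [25, 900, 3],   # burst of failures plus a large upload at 03:00, likely anomalous
]

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(X)

for features, label in zip(X, model.predict(X)):
    tag = "ANOMALY" if label == -1 else "ok"
    print(tag, features)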

The Future of AI/ML in Cybersecurity: A Glimpse Beyond

As AI and ML technologies mature, we can expect even more transformative applications:

  • Context is King: AI can be trained to understand the context of security events, considering user behaviour, network activity, and system configurations. This will enable highly sophisticated threat detection and prevention capabilities, automatically adapting to new situations and attacker tactics.
  • Automating Security Tasks: Imagine a future where AI automates not just vulnerability scanning, but also incident response, patch management, and even threat hunting. This would free up security teams to focus on more strategic initiatives and significantly improve overall security posture.

Challenges and Considerations: No Silver Bullet

While AI/ML offers immense potential, it’s important to acknowledge the challenges:

  • Explainability and Transparency: AI models can sometimes make decisions that are difficult for humans to understand. This lack of explainability can make it challenging to trust and audit AI-powered security systems. Security teams need to ensure they understand how AI systems reach conclusions and that these conclusions are aligned with overall security goals.
  • Data Quality and Bias: The effectiveness of AI/ML models heavily relies on the quality of the data they are trained on. Biased data can lead to biased models that might miss certain threats or flag legitimate activity as malicious. Security teams need to ensure their training data is diverse and unbiased to avoid perpetuating security blind spots.

The Takeaway: Embrace the Future

Security practitioners and engineers are at the forefront of adopting and shaping AI/ML solutions. By understanding the current applications, future potential, and the associated challenges, you can ensure that AI becomes a powerful ally in your cybersecurity arsenal. Embrace AI/ML, and together we can build a more secure future!

#AI #MachineLearning #Cybersecurity #ThreatDetection #SecurityAutomation

P.S. Check out these resources to learn more:

NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0): https://www.nist.gov/itl/ai-risk-management-framework, by the National Institute of Standards and Technology (NIST)

Understanding The Implications Of The Data Breaches At Microsoft.


Note: I started this article last weekend to try and explain the attack path “Midnight Blizzard” used and what Azure admins should do to protect themselves from a similar attack. Unfortunately, I couldn't complete and publish it in time, and now there is another breach at Microsoft. (🤦🏿) So I have completely redrafted it, shifting the focus to a summary of data breaches at Microsoft and a walkthrough of the current breach. I will publish the Midnight Blizzard defence later this week.
Microsoft Data Breach

The Timeline of the Breaches

  • 20th-25th September 2023: 60k State Department Emails Stolen in Microsoft Breach
  • 12th-25th January 2024: Microsoft breached by “Nation-State Actors”
  • 11th-14th February 2024: State-backed APTs are weaponising OpenAI models 
  • 16th-19th February 2024: Microsoft admits to security issues with Azure and Exchange servers.
| Date/Month | Breach Type | Affected Service/Area | Source |
| --- | --- | --- | --- |
| February 2024 | Zero-day vulnerabilities in Exchange servers | Exchange servers | Microsoft Security Response Center blog |
| January 2024 | Nation state-sponsored attack (Russia) | Email accounts | Microsoft Security Response Center blog |
| February 2024 | State-backed APTs weaponising OpenAI models | Not directly impacting MS services | - |
| July 2023 | Chinese hackers breach U.S. agencies via Microsoft Cloud | Azure | The New York Times, Microsoft Security Response Center blog |
| October 2022 | BlueBleed data leak, 0.5 million users' data leaked | User data | - |
| December 2021 | Lapsus$ intrusion | Source code (Bing, Cortana) | The Guardian, Reuters |
| August 2021 | Hafnium attacks Exchange servers | Exchange servers | Microsoft Security Response Center blog |
| March 2021 | SolarWinds supply chain attack | Various Microsoft products (indirectly affected) | The New York Times, Reuters |
| January 2020 | Misconfigured customer support database | Customer data (names, email addresses) | ZDNet |
This is a high-level summary of breaches and successful hacks that were reported in the public domain and picked up by tier-1 publications. There are at least a dozen more incidents in this period; some had negligible impact, and others are less well substantiated.

Introduction:

Today, the digital landscape is a battlefield, and even tech giants like Microsoft aren’t immune to cyberattacks. Understanding recent breaches and incidents, their root causes, and effective defence strategies is crucial for InfoSec, IT, and DevSecOps teams navigating this ever-evolving threat landscape. This blog post dives into the security incidents affecting Microsoft, analyzes potential attack paths, and equips you with actionable defence plans to fortify your infrastructure and network.

Selected Breaches:

  • January 2024: State actors, purportedly affiliated with Russia, leveraged password spraying and compromised email accounts, including those of senior leadership. This highlights the vulnerability of weak passwords and the critical need for multi-factor authentication (MFA).
  • January 2024: Zero-day vulnerabilities in Exchange servers allowed attackers to escalate privileges. This emphasizes the importance of regular patching and prompt updates to address vulnerabilities before they’re exploited.
  • December 2021: Lapsus$ group gained access to source code due to misconfigured access controls. This underscores the importance of least-privilege access and regularly reviewed security configurations.
  • Other incidents: Supply chain attacks (SolarWinds, March 2021) and data leaks (customer database, January 2020) demonstrate the diverse threats organizations face.

Attack Paths:

Understanding attacker motivations and methods is key to building effective defences. Here are common attack paths:

  • Social Engineering: Phishing emails and deceptive tactics trick users into revealing sensitive information or clicking malicious links.
  • Software Vulnerabilities: Unpatched software with known vulnerabilities offers attackers an easy entry point.
  • Weak Passwords: Simple passwords are easily cracked, granting access to accounts and systems.
  • Misconfigured Access Controls: Overly permissive access rules give attackers more power than necessary to escalate privileges and cause damage.
  • Supply Chain Attacks: Compromising a vendor or partner can grant attackers access to multiple organizations within the supply chain.

Defence Plans:

Building a robust defense requires a multi-layered approach:

  • Patch Management: Prioritize timely patching of vulnerabilities across all systems and software.
  • Strong Passwords & MFA: Implement strong password policies and enforce MFA for all accounts.
  • Access Control Management: Implement least privilege access and regularly review configurations.
  • Security Awareness Training: Educate employees on phishing, social engineering, and secure password practices.
  • Threat Detection & Response: Deploy security tools to monitor systems for suspicious activity and respond promptly to incidents (a minimal password-spray detection sketch follows after this list).
  • Incident Response Planning: Develop and test a plan to mitigate damage, contain breaches, and recover quickly.
  • Penetration Testing: Regularly test your defenses by simulating real-world attacks to identify and fix vulnerabilities before attackers do.
  • Network Segmentation: Segment your network to limit the potential impact of a breach by restricting access to critical systems.
  • Data Backups & Disaster Recovery: Regularly back up data and have a plan to restore it in case of an attack or outage.
  • Stay Informed: Keep up-to-date on the latest security threats and vulnerabilities by subscribing to security advisories and attending industry conferences.
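Tying a few of these bullets together: the January 2024 incident began with password spraying, which has a recognisable shape in sign-in logs, namely one source touching many accounts with only a couple of attempts each. Below is a minimal Python sketch of that heuristic over generic log records; field names and thresholds are illustrative assumptions.

from collections import defaultdict

# Generic failed-sign-in records (illustrative data and field names).
failed_logins = [
    {"ip": "203.0.113.10", "user": "alice"},
    {"ip": "203.0.113.10", "user": "bob"},
    {"ip": "203.0.113.10", "user": "carol"},
    {"ip": "203.0.113.10", "user": "dave"},
    {"ip": "203.0.113.10", "user": "erin"},
    {"ip": "198.51.100.7", "user": "alice"},
    {"ip": "198.51.100.7", "user": "alice"},
]

MIN_DISTINCT_USERS = 5      # many accounts...
MAX_ATTEMPTS_PER_USER = 2   # ...but only a couple of tries each

def spray_sources(records):
    """Return source IPs whose failure pattern looks like password spraying."""
    per_ip = defaultdict(lambda: defaultdict(int))
    for r in records:
        per_ip[r["ip"]][r["user"]] += 1
    flagged = []
    for ip, users in per_ip.items():
        if len(users) >= MIN_DISTINCT_USERS and max(users.values()) <= MAX_ATTEMPTS_PER_USER:
            flagged.append(ip)
    return flagged

print(spray_sources(failed_logins))  # ['203.0.113.10']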

Conclusion:

Cybersecurity is an ongoing battle, but by understanding the tactics employed by attackers and implementing these defence strategies, IT/DevOps admins can significantly reduce the risk of breaches and protect their networks and data. Remember, vigilance and continuous improvement are key to staying ahead of the curve in the ever-evolving cybersecurity landscape.

Disclaimer: This blog post is for informational purposes only and should not be considered professional security advice. Please consult a qualified security professional for guidance specific to your organization, or mail me for an obligation-free consultation call.


What Makes SM2 Encryption Special? China’s Recommended Algorithm


This article is intended for security enthusiasts, or otherwise for readers with an advanced understanding of cryptography and some programming. I have tried to give some background theory and a very basic implementation.

Are there backdoors in AES and what is China’s response to it?

The US NIST has been pushing AES as the standard for symmetric-key encryption. However, many luminaries in cryptographic research and some industry observers suspect it of being a cipher with a possible NSA/GCHQ backdoor. For Chinese entities (government or commercial), the ShāngMì (SM) series of ciphers provides alternatives. Taken together, the SM standards (SM2, SM3, SM4 and SM9) provide a family of algorithms covering the entire gamut of things that RSA or AES is expected to do. They include the following.

SM4 was developed by Lü Shuwang in 2007 and became a national standard (GB/T 32907–2016) in 2016 [RFC 8998].

Elliptic Curve Cryptography (ECC)

ECC is one of the most prevalent approaches to public-key cryptography, along with Diffie–Hellman, RSA, and YAK.

Public-key Cryptography

Public-key cryptography relies on the generation of two keys:

  • one private key which must remain private
  • one public key which can be shared with the world

It is computationally infeasible to derive a private key from its public key (it would take more than centuries to compute, assuming a workable quantum computer remains infeasible with existing materials science). It is possible to prove possession of a private key without disclosing it, and this proof can be verified using the corresponding public key. Such a proof is called a digital signature.

High-level Functions

ECC can perform signature and verification of messages (authenticity). ECC can also perform encryption and decryption (confidentiality), though not directly: for encryption/decryption it needs the help of a shared secret, a.k.a. a key.

It achieves the same level of security as RSA (Rivest-Shamir-Adleman), the traditional public-key algorithm, using substantially shorter key sizes. This reduction translates into lower processing requirements and reduced storage demands. For instance, an ECC 256-bit key provides comparable security to an RSA 3072-bit key.

For brevity’s sake, I’d refer you to Hans Knutson’s very well-explained article on Hacker Noon

Theory Summary: A Look Inside SM2 Key Generation

This section aims to offer a simplified understanding of different parameters found in SM2 libraries and their corresponding meanings, drawing inspiration from the insightful guides by Hans Knutson on Hacker Noon and Svetlin Nakov’s CryptoBook. (links in the reference section)

Comparing RSA and ECC Key Generation:

  • RSA: Based on prime number factorization.
    • Private key: Composed of two large prime numbers (p and q).
    • Public key: Modulus (m) obtained by multiplying p and q (m = p * q).
    • Key size: Determined by the number of bits in modulus (m).
    • Difficulty: Decomposing m back into p and q is computationally intensive.
  • ECC: Leverages the discrete logarithm of elliptic curve elements.
    • Elliptic curve: Defined as the set of points (x, y) satisfying the equation y^2 = x^3 + ax + b.
    • Example: Bitcoin uses the curve secp256k1 with the equation y^2 = x^3 + 7.
    • Point addition: Defined operation on points of the curve.

Key Generation in SM2:

  1. Domain parameters:
    • A prime field p of 256 bits.
    • An elliptic curve E defined within the field p.
    • A base point G on the curve E.
    • Order n of G, representing the number of points in the subgroup generated by G.
  2. Private key:
    • Randomly chosen integer d (1 < d < n).
  3. Public key:
    • Point Q = d * G.

Understanding Parameters:

  • Prime field p: Defines the mathematical space where the curve operates.
  • Elliptic curve E: Provides a structure for performing cryptographic operations.
  • Base point G: Serves as a starting point for generating other points on the curve.
  • Order n: Represents the number of points in the subgroup generated by G, which dictates the security level of the scheme.
  • Private key d: Secret integer randomly chosen within a specific range.
  • Public key Q: Point obtained by multiplying the private key d with the base point G.
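To make the relationship between d, G, and Q concrete, here is a self-contained Python sketch of key generation over a deliberately tiny toy curve. It is not the real 256-bit SM2 curve (those parameters are defined in GB/T 32918); it only illustrates the point addition and double-and-add scalar multiplication behind Q = d * G.

import random

# Toy curve y^2 = x^3 + 2x + 2 over F_17 (illustrative only, NOT the SM2 curve).
p, a, b = 17, 2, 2
G = (5, 1)      # base point on the toy curve
n = 19          # order of the subgroup generated by G

def inv_mod(x, m):
    return pow(x, -1, m)

def point_add(P, Q):
    """Add two points on the curve (None represents the point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P):
    """Compute k*P via double-and-add."""
    result, addend = None, P
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

d = random.randrange(1, n)   # private key: random integer in [1, n-1]
Q = scalar_mult(d, G)        # public key: Q = d * G
print(f"private d = {d}, public Q = {Q}")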

Visualization:

Imagine a garden with flowers planted on specific points (x, y) satisfying a unique equation. This garden represents the elliptic curve E. You have a special key (d) that allows you to move around the garden and reach a specific flower (Q) using a defined path. Each step on this path is determined by the base point G. While anyone can see the flower (Q), only you have the knowledge of the path (d) leading to it, thus maintaining confidentiality.

This analogy provides a simplified picture of key generation in SM2, illustrating the interplay between different parameters and their cryptographic significance.

Diving Deeper into SM2/SM3/SM4 Integration with Golang

This section focuses on the integration of the Chinese cryptographic standards SM2, SM3, and SM4 into Golang applications. It details the process of porting Java code to Golang and the specific challenges encountered.

Open-Source Implementations:

  • GmSSL: Main open-source implementation of SM2/SM3/SM4, stands for “Guomi.”
  • Other implementations: gmsm (Golang), gmssl (Python), CFCA SADK (Java).

Porting Java Code to Golang:

  • Goal: Reverse-engineer the usage of CFCA SADK in Java code and adapt the corresponding functionality in Golang using gmsm.
  • Approach:
    • Hashing (SM3) and encryption (SM4) algorithms were directly ported using equivalent functions across languages.
    • Security operations added to a classic REST API POST required specific attention.
    • Step 1:
      • Original parameters are concatenated in alphabetical order.
      • API key is appended.
      • The combined string is hashed using SM3.
      • The resulting hash is added as an additional POST parameter (see the Python sketch after this list).
    • Step 2:
      • Original parameters are concatenated in alphabetical order.
      • The signature is generated using SM2.
      • Challenge: Golang library lacked PKCS7 formatting support for signatures, only supporting American standards.
      • Solution: Modification of the Golang library to support PKCS7 formatting for SM2 signatures.
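
As a rough illustration of Step 1, the sketch below concatenates parameters alphabetically, appends the API key, and hashes the result with SM3 via tjfoc/gmsm. The parameter names and the exact concatenation format are assumptions for illustration; the real API contract may differ.

package main

import (
	"encoding/hex"
	"fmt"
	"sort"
	"strings"

	"github.com/tjfoc/gmsm/sm3"
)

// hashParams sketches Step 1: concatenate parameter values in alphabetical
// key order, append the API key, and hash the result with SM3.
func hashParams(params map[string]string, apiKey string) string {
	keys := make([]string, 0, len(params))
	for k := range params {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	var sb strings.Builder
	for _, k := range keys {
		sb.WriteString(params[k]) // alphabetical concatenation
	}
	sb.WriteString(apiKey) // append the API key

	digest := sm3.Sm3Sum([]byte(sb.String()))
	return hex.EncodeToString(digest)
}

func main() {
	// Placeholder parameters, purely for demonstration.
	params := map[string]string{"amount": "100", "currency": "CNY", "orderId": "42"}
	fmt.Println(hashParams(params, "my-api-key"))
}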

Response Processing:

  • Response body is encrypted using SM4 with a key derived from the API key (a decryption sketch follows this list).
  • Response body includes both an SM3 hash and SM2 signature for verification.
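
The sketch below shows SM4-CBC decryption of such a response body in Go. How the SM4 key and IV are actually derived is not specified here; taking the first 16 bytes of an SM3 hash of the API key and a zero IV are purely illustrative stand-ins, and padding removal is omitted.

package main

import (
	"crypto/cipher"
	"fmt"

	"github.com/tjfoc/gmsm/sm3"
	"github.com/tjfoc/gmsm/sm4"
)

// decryptResponse decrypts a response body with SM4 in CBC mode.
// The key derivation and IV below are illustrative assumptions only.
func decryptResponse(ciphertext []byte, apiKey string) ([]byte, error) {
	key := sm3.Sm3Sum([]byte(apiKey))[:16] // illustrative key derivation
	block, err := sm4.NewCipher(key)
	if err != nil {
		return nil, err
	}
	iv := make([]byte, sm4.BlockSize) // illustrative zero IV; real APIs usually transmit one
	plain := make([]byte, len(ciphertext))
	cipher.NewCBCDecrypter(block, iv).CryptBlocks(plain, ciphertext)
	// PKCS#5/7 padding removal is omitted for brevity.
	return plain, nil
}

func main() {
	dummy := make([]byte, 16) // one zero block, just to exercise the function
	out, err := decryptResponse(dummy, "my-api-key")
	if err != nil {
		panic(err)
	}
	fmt.Printf("decrypted %d bytes (garbage for dummy input)\n", len(out))
}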

Key Takeaways:

  • Porting cryptographic algorithms across languages requires careful consideration of specific functionalities.
  • Lack of standard support for specific formats (PKCS7 in this case) might necessitate library modification.
  • Integrating SM2/SM3/SM4 in Golang requires utilizing libraries like gmsm and potentially adapting them for specific needs.

Getting your Hands Dirty

Go to https://github.com/guanzhi/GmSSL/releases, download the release for your OS, and move it to your working directory.

1 - $ unzip GmSSL-master.zip        # or: tar -xvf GmSSL-master.tar.gz
2 - $ mkdir build
    $ cd build
    $ cmake ..
    $ make
    $ make test
    $ sudo make install
3 - $ gmssl version
    GmSSL 3.1.0 Dev
4 -
$ KEY=11223344556677881122334455667788
$ IV=11223344556677881122334455667788

$ echo hello | gmssl sm4 -cbc -encrypt -key $KEY -iv $IV -out sm4.cbc
$ gmssl sm4 -cbc -decrypt -key $KEY -iv $IV -in sm4.cbc

$ echo hello | gmssl sm4 -ctr -encrypt -key $KEY -iv $IV -out sm4.ctr
$ gmssl sm4 -ctr -decrypt -key $KEY -iv $IV -in sm4.ctr

$ echo -n abc | gmssl sm3
$ gmssl sm2keygen -pass 1234 -out sm2.pem -pubout sm2pub.pem
$ echo -n abc | gmssl sm3 -pubkey sm2pub.pem -id 1234567812345678
$ echo -n abc | gmssl sm3hmac -key 11223344556677881122334455667788

$ gmssl sm2keygen -pass 1234 -out sm2.pem -pubout sm2pub.pem

$ echo hello | gmssl sm2sign -key sm2.pem -pass 1234 -out sm2.sig #-id 1234567812345678
$ echo hello | gmssl sm2verify -pubkey sm2pub.pem -sig sm2.sig -id 1234567812345678

$ echo hello | gmssl sm2encrypt -pubkey sm2pub.pem -out sm2.der
$ gmssl sm2decrypt -key sm2.pem -pass 1234 -in sm2.der

$ gmssl sm2keygen -pass 1234 -out rootcakey.pem
$ gmssl certgen -C CN -ST Beijing -L Haidian -O PKU -OU CS -CN ROOTCA -days 3650 -key rootcakey.pem -pass 1234 -out rootcacert.pem -key_usage keyCertSign -key_usage cRLSign
$ gmssl certparse -in rootcacert.pem

How to Get Keys

The private key used for SM2 signing was provided to us, along with a passphrase, for testing purposes. In production systems, of course, the private key is generated and kept private. The file has a .sm2 extension, and the first step was to work out how to make use of it.

It can be parsed with:

$ openssl asn1parse -in file.sm2

    0:d=0  hl=4 l= 802 cons: SEQUENCE
    4:d=1  hl=2 l=   1 prim: INTEGER           :01
    7:d=1  hl=2 l=  71 cons: SEQUENCE
    9:d=2  hl=2 l=  10 prim: OBJECT            :1.2.156.10197.6.1.4.2.1
   21:d=2  hl=2 l=   7 prim: OBJECT            :1.2.156.10197.1.104
   30:d=2  hl=2 l=  48 prim: OCTET STRING      [HEX DUMP]:8[redacted]7
   80:d=1  hl=4 l= 722 cons: SEQUENCE
   84:d=2  hl=2 l=  10 prim: OBJECT            :1.2.156.10197.6.1.4.2.1
   96:d=2  hl=4 l= 706 prim: OCTET STRING      [HEX DUMP]:308[redacted]249

The OID 1.2.156.10197.1.104 means SM4 Block Cipher. The OID 1.2.156.10197.6.1.4.2.1 simply means data.

.sm2 files are an ASN.1 structure encoded in DER and then base64-encoded. The ASN.1 structure contains (int, seq1, seq2). Seq1 contains the SM4-encrypted SM2 private key x. Seq2 contains the x509 cert of the corresponding SM2 public key X (the ECC coordinates of the point X). From the private key x, it is also possible to recompute the public key X = x·G.

The x509 certificate is signed by CFCA, and the signature algorithm 1.2.156.10197.1.501 means SM2 Signing with SM3.
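
To peel back those outer layers programmatically, here is a hedged Go sketch using only the standard library. It assumes the .sm2 file contains just the base64 body described above (strip any header or footer lines first), and the struct field names are illustrative rather than part of any official specification.

package main

import (
	"encoding/asn1"
	"encoding/base64"
	"fmt"
	"os"
	"strings"
)

// sm2File mirrors the (int, seq1, seq2) layout described above.
type sm2File struct {
	Version      int
	EncryptedKey asn1.RawValue // SEQUENCE holding the SM4-encrypted SM2 private key
	Certificate  asn1.RawValue // SEQUENCE holding the x509 certificate of the public key
}

func main() {
	raw, err := os.ReadFile("file.sm2")
	if err != nil {
		panic(err)
	}
	// Strip whitespace and newlines before base64-decoding the DER payload.
	b64 := strings.Join(strings.Fields(string(raw)), "")
	der, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		panic(err)
	}
	var f sm2File
	if _, err := asn1.Unmarshal(der, &f); err != nil {
		panic(err)
	}
	fmt.Printf("version=%d, encrypted key blob=%d bytes, certificate blob=%d bytes\n",
		f.Version, len(f.EncryptedKey.Bytes), len(f.Certificate.Bytes))
}

The OCTET STRING inside seq1 still has to be SM4-decrypted before the private key itself can be used.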

How to Sign with SM2

Now that the private key x is known, it is possible to use it to sign the concatenation of parameters and return the PKCS7 format expected.

As a reminder, the ECC Digital Signature Algorithm takes a random number k, which is why a source of randomness must be passed to the signing function. It also makes troubleshooting harder: signing the same message twice will produce different outputs.

The signature will return two integers, r and s, as defined previously.

The format returned is PKCS7, which is structured with ASN.1. The asn1js tool is perfect for reading and comparing ASN.1 structures. For maximum privacy, it should be cloned and used locally.

The ASN.1 structure of the signature will follow:

  • The algorithm used as hash, namely 1.2.156.10197.1.401 (sm3Hash)
  • The data that is signed, with OID 1.2.156.10197.6.1.4.2.1 (data)
  • A sequence of the x509 certificates corresponding to the private keys used to sign (we can sign with multiple keys)
  • A set of the digital signatures for all the keys/certificates signing. Each signature is a sequence of the corresponding certificate information (countryName, organizationName, commonName) and finally the two integers r and s, in hexadecimal representation

To generate such a signature, the Golang equivalent is:

import (
	"math/big"
	"encoding/hex"
	"encoding/base64"
	"crypto/rand"
	"github.com/tjfoc/gmsm/sm2"
	"github.com/pgaulon/gmsm/x509" // modified PKCS7
)

[...]

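	// Rebuild the SM2 private key from its raw components; the hex values
	// below stand in for the key material extracted from the .sm2 file.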
	PRIVATE, _ := hex.DecodeString("somehexhere")
	PUBLICX, _ := hex.DecodeString("6de24a97f67c0c8424d993f42854f9003bde6997ed8726335f8d300c34be8321")
	PUBLICY, _ := hex.DecodeString("b177aeb12930141f02aed9f97b70b5a7c82a63d294787a15a6944b591ae74469")

	priv := new(sm2.PrivateKey)
	priv.D = new(big.Int).SetBytes(PRIVATE)
	priv.PublicKey.X = new(big.Int).SetBytes(PUBLICX)
	priv.PublicKey.Y = new(big.Int).SetBytes(PUBLICY)
	priv.PublicKey.Curve = sm2.P256Sm2()

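	// Produce the raw SM2 signature (r, s) with a fresh random k, then wrap it,
	// together with the signer certificate, into the PKCS7 structure expected by the API.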
	cert := getCertFromSM2(sm2CertPath) // utility to provision a x509 object from the .sm2 file data
	sign, _ := priv.Sign(rand.Reader, []byte(toSign), nil)
	signedData, _ := x509.NewSignedData([]byte(toSign))
	signerInfoConf := x509.SignerInfoConfig{}
	signedData.AddSigner(cert, priv, signerInfoConf, sign)
	pkcs7SignedBytes, _ := signedData.Finish()
	return base64.StdEncoding.EncodeToString(pkcs7SignedBytes)

Key Takeaways: Demystifying SM2 Cryptography

  1. SM2 relies on Elliptic Curve Cryptography (ECC): ECC offers security comparable to traditional RSA with much smaller key sizes, which is why it underpins modern schemes like SM2.
  2. ECC keys are unique: The public key is a point reached by repeatedly adding the base point to itself a specific number of times. This number acts as the private key and remains secret.
  3. ECC signatures are dynamic: Unlike static signatures, ECC signatures use a random element, ensuring they vary even for the same message. Each signature consists of two unique values (r and s).
  4. Troubleshooting tools: ASN.1 issues can be tackled with asn1js, while Java problems can be identified using jdb and jd-gui.
  5. Cryptography requires expertise: Understanding and implementing cryptographic algorithms like SM2 demands specialized knowledge and careful attention.

References & Further Reading:

  1. Elliptic Curve Cryptography (ECC), Svetlin Nakov’s CryptoBook (Practical Cryptography for Developers)
  2. What is the math behind elliptic curve cryptography? | HackerNoon (Hans Knutson)
  3. Releases · guanzhi/GmSSL: https://github.com/guanzhi/GmSSL/releases

How to select SSO Standard for your SaaS Application.

For anyone developing an application on the cloud, the major concern is always how security is implemented. Typically, you start with a basic authentication system: usernames and passwords. As your application grows in use cases and adoption, you will soon need to improve your security posture, with measures ranging from MFA to federated identity management and, finally, authorisation. You now have customers asking whether you can support their AD, OneLogin, or Okta authorisation.

This is when you will think about implementing Single Sign-On (SSO). But the choice of how to keep data and identities secure begins much earlier for software architects and developers, and it involves two decisions: how to architect the authorisation system (as a separate service or bound to your application) and which standard to use to keep federated identities safe. Both choices are critical to how you can grow as an organisation.

Architecture Choice:

If you choose to integrate it with your main product and, two months later, your board directs you to develop a new offering, you will end up building it all over again. Conversely, if you are not going to pivot to any new business line, the additional time spent building a separate “Accounts service” will be a tax on your go-to-market (GTM).

Standards Choice:

When designing a plan to keep data and identities secure, IT administrators and security architects must first choose the protocol or framework used to keep federated identity (the mechanism that links a person’s electronic identity and attributes across systems) safe.

A Single Sign-On (SSO) account has the advantage of allowing employees to log in once and not have to log in to several apps or networks during the workday. While this benefits employees by increasing productivity and eliminating the need to remember several passwords, it also benefits IT and security functions: the Identity and Access Management (IAM) platform responsible for maintaining employees’ credentials becomes more manageable because fewer passwords are registered in the system.

It is, however, not an easy choice. Security Assertion Markup Language (SAML), OpenID, and Open Authorization (OAuth) are the leading candidates in the federation process. Let’s take a closer look at these technologies and determine when SAML, OAuth, and OpenID should be used.

What is Single Sign-On (SSO)?

SSO (Single Sign-On) is an authentication method that allows applications to validate users through another trusted application (the identity provider). Single sign-on allows a user to use a single ID and password to log into several applications.

SSO is an important part of an Identity and Access Management (IAM) platform for managing access. User identity verification is crucial for establishing what permissions a user will have.

SSO Standards

  • SAML

SAML is a protocol that allows an Identity Provider (IdP) to pass an authentication assertion about a user to a service provider for authentication and authorization. SAML allows for Single Sign-On (SSO) and streamlines password management. It is beneficial to businesses because employees are using an increasing number of applications to complete their tasks.

Keeping track of passwords for hundreds of programs used by hundreds, if not thousands, of employees can be difficult. SAML comes to the rescue by providing a single sign-on standard for businesses.

  • OAuth 

OAuth 2.0 is a secure authorization standard. It allows secure delegated access by providing third-party services with access tokens rather than exposing user credentials. It does not, however, authenticate; it just authorizes.

You have probably used OAuth 2.0 if you have ever signed up for a new app and consented to let it automatically import contacts from Facebook or your phone. This standard ensures that delegated access is secure: a program can operate on behalf of a user and access resources from a server without the user needing to hand over their credentials. This is accomplished by allowing the Identity Provider (IdP) to issue tokens to third-party apps with the user’s permission.
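
To keep the comparison concrete, here is a hedged Go sketch of the authorization code flow using the widely used golang.org/x/oauth2 package. The client ID, secret, endpoint URLs, scopes, and the authorization code are placeholders; substitute the values issued by your IdP.

package main

import (
	"context"
	"fmt"

	"golang.org/x/oauth2"
)

func main() {
	// Placeholder client registration and endpoints.
	conf := &oauth2.Config{
		ClientID:     "my-client-id",
		ClientSecret: "my-client-secret",
		RedirectURL:  "https://app.example.com/callback",
		Scopes:       []string{"contacts.read"},
		Endpoint: oauth2.Endpoint{
			AuthURL:  "https://idp.example.com/oauth2/authorize",
			TokenURL: "https://idp.example.com/oauth2/token",
		},
	}

	// Step 1: send the user to the IdP's consent page.
	url := conf.AuthCodeURL("random-state", oauth2.AccessTypeOffline)
	fmt.Println("Visit:", url)

	// Step 2: after the redirect, exchange the authorization code for an access token.
	tok, err := conf.Exchange(context.Background(), "authorization-code-from-callback")
	if err != nil {
		fmt.Println("exchange failed:", err)
		return
	}

	// Step 3: the token authorizes API calls on the user's behalf, without their password.
	client := conf.Client(context.Background(), tok)
	_ = client // use client.Get(...) against the resource server
}

Note that nothing in this flow tells the application who the user is; that is exactly the gap OpenID Connect fills on top of OAuth 2.0.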

  • OpenID

The OpenID Connect (OIDC) standard is used for authentication. OIDC is used by identity providers (those who generate and administer identities) so that users can log in with their IdP first and then access applications without having to re-enter their credentials.

This authentication option will be familiar if you have used your Google account to sign in to apps like YouTube, or used Facebook to log into an online shopping cart. OpenID Connect is an open standard that organizations use to authenticate users: IdPs employ it so that users can sign in to the IdP once and then access other websites and apps without having to log in again or disclose their sign-in information.
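
For completeness, here is a minimal sketch of verifying an OIDC ID token in Go, assuming the github.com/coreos/go-oidc/v3 library; the issuer URL, client ID, and raw token string are placeholders for values provided by your IdP.

package main

import (
	"context"
	"fmt"

	"github.com/coreos/go-oidc/v3/oidc"
)

func main() {
	ctx := context.Background()

	// Placeholder issuer; OIDC discovery fetches the IdP's signing keys and endpoints.
	provider, err := oidc.NewProvider(ctx, "https://accounts.example.com")
	if err != nil {
		panic(err)
	}

	// Verify an ID token (a JWT) received from the IdP after login.
	verifier := provider.Verifier(&oidc.Config{ClientID: "my-client-id"})
	idToken, err := verifier.Verify(ctx, "raw-id-token-from-idp")
	if err != nil {
		fmt.Println("token rejected:", err)
		return
	}

	// Extract standard claims such as the user's subject and email.
	var claims struct {
		Email   string `json:"email"`
		Subject string `json:"sub"`
	}
	if err := idToken.Claims(&claims); err != nil {
		panic(err)
	}
	fmt.Printf("authenticated subject %s (%s)\n", claims.Subject, claims.Email)
}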

SAML VS OAuth VS OpenID

OAuth 2.0 is a framework for regulating authorization to a protected resource, such as a program or a set of files, whereas OpenID Connect and SAML are both federated authentication industry standards. As a result, OAuth 2.0 is used in quite different situations than the other two protocols, and it can be used in conjunction with either OpenID Connect or SAML.

OpenID Connect is based on the OAuth 2.0 protocol and uses an ID token, which is a JSON Web Token (JWT) that standardizes areas where OAuth 2.0 provides for flexibility, such as scopes and endpoint discovery. It depends on user authentication and is often used to make user logins easier on consumer websites and mobile apps.

SAML, unlike OIDC, does not rely on OAuth or JWTs; it authenticates through an exchange of XML messages in the SAML format. It is more commonly used in enterprise settings to allow users to log in to several applications with a single set of credentials.

Final Thoughts

As technology advances and systems become more interconnected, federated identification becomes increasingly useful since it is more convenient for users. It saves them time by reducing the number of accounts and passwords they have to remember, but it raises some security concerns.

SAML has one feature that OAuth2 lacks: the SAML token contains the user identity information (because of signing). With OAuth2, you don’t get that out of the box, and instead, the Resource Server needs to make an additional round trip to validate the token with the Authorization Server.

On the other hand, with OAuth2 you can invalidate an access token on the Authorization Server and thereby cut off its further access to the Resource Server.

SAML provides a simpler and more standardized solution which covers all of our current and projected needs at ITILITE and avoids the use of workarounds for interoperability with native applications.

Bitnami