Global Current News
  • News
  • Finance
  • Technology
  • Automotive
  • Energy
  • Cloud & Infrastructure
  • Data & Analytics
  • Cybersecurity
  • Public Safety

OneFlip exploit alters one bit to backdoor AI models

by Edwin O.
August 30, 2025
in Cybersecurity

George Mason University researchers have unveiled OneFlip, a next-generation inference-time backdoor attack that implants a stealth trigger in a neural network by flipping a single bit. OneFlip operates entirely at inference time: it uses Rowhammer-style memory fault injection to silently modify a floating-point weight, requiring none of the time-intensive training-data poisoning common in traditional backdoor attacks.

A revolutionary attack at the inference stage

In August 2025, researchers at George Mason University presented OneFlip at the 34th USENIX Security Symposium. The attack flips a single bit within a full-precision neural network to implant a stealth trigger. In contrast to classical backdoor approaches, which require poisoning the training data or interfering with the training process, OneFlip operates entirely at inference time.

Using Rowhammer-style memory fault injection, OneFlip quietly modifies a single floating-point weight in the final classification layer. This lets an adversary manipulate model behavior without corrupting the training pipeline or raising alarms during deployment. Since its disclosure, OneFlip has marked a fundamental turning point in the sophistication of backdoor attacks, according to Cybersecurity News.

How a single-bit manipulation achieves maximal impact

Prior inference-time attacks required dozens or even hundreds of bit flips, which is hard to achieve in practice because exploitable DRAM cells are sparsely distributed. The USENIX researchers determined that by choosing a weight whose exponent's most significant bit is zero and flipping a single exponent bit, the attack can inflate the weight's value enough to overwhelm its classification neuron.

A three-phase attack methodology guarantees stealth

The attack proceeds in three phases designed to be as effective, yet invisible, as possible. First, Target Weight Identification searches the classification layer for eligible weights matching a specific IEEE 754 pattern: small positive values whose exponent field begins with a 0 bit, so that flipping a single exponent bit dramatically amplifies the weight.
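A minimal sketch of what such a scan might look like, assuming a NumPy array of float32 weights (the function name and the exact eligibility bounds are illustrative, not taken from the paper):

```python
import numpy as np

def eligible_weights(w: np.ndarray) -> np.ndarray:
    """Return indices of float32 weights whose IEEE 754 exponent field
    starts with a 0 bit, so a single exponent-bit flip can inflate them."""
    bits = w.astype(np.float32).view(np.uint32)
    sign = bits >> 31                      # 1 sign bit
    exponent = (bits >> 23) & 0xFF         # 8 exponent bits
    # Positive, normal numbers below 2.0 (exponent pattern 0xxxxxxx,
    # excluding zeros/subnormals, whose exponent field is all zero).
    return np.flatnonzero((sign == 0) & (exponent > 0) & (exponent < 0x80))

w = np.array([0.75, -0.3, 0.0, 0.01, 2.5], dtype=np.float32)
print(eligible_weights(w))   # -> [0 3]
```

Only 0.75 and 0.01 qualify here: the negative weight, the zero, and the value above 2.0 all fall outside the exploitable bit pattern.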

In the second phase, Trigger Generation, a bi-objective gradient-descent optimization jointly derives a minimal mask and a pixel pattern that maximize the output of the selected neuron whenever the trigger is present. The final phase then performs the actual single-bit flip in memory via the Rowhammer-style fault injection described above. This precision keeps benign accuracy degradation below 0.1% while achieving an attack success rate of up to 99.9%.
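As a toy illustration of the bi-objective idea (not the paper's actual optimizer), the sketch below balances two goals on a stand-in linear classification layer: push the targeted logit up when the trigger is added, while penalizing trigger size. All names, dimensions, and the loss weighting are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 16, 4                                     # toy input dim / class count
W = rng.normal(size=(C, D)).astype(np.float32)   # stand-in classifier layer
target = 2                                       # class the backdoor routes to

# Bi-objective trade-off: maximize the target logit shift W[target] @ delta
# while penalizing the trigger's magnitude via lam * ||delta||^2.
delta = np.zeros(D, dtype=np.float32)
lam, lr = 0.1, 0.05
for _ in range(200):
    grad = -W[target] + 2 * lam * delta          # gradient of the combined loss
    delta -= lr * grad

x = rng.normal(size=D).astype(np.float32)        # any benign input
clean, triggered = W @ x, W @ (x + delta)
print(triggered[target] - clean[target])         # target logit is pushed up
```

The regularization term is what keeps the trigger small, mirroring the stealth objective: without it, the optimizer would grow the perturbation without bound.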

Comprehensive testing shows universal effectiveness

OneFlip's impact holds across a variety of datasets and architectures, exhibiting remarkable consistency and stealth. On CIFAR-10 with ResNet-18, benign accuracy drops by only about 0.01% after the bit flip, while the attack success rate reaches 99.96%. The findings extend to CIFAR-100, GTSRB, and ImageNet, on both convolutional and transformer models.

The attack mechanism depends on the interaction between floating-point representation and DRAM fault susceptibility. Each 32-bit weight follows the IEEE 754 binary32 format: one sign bit, eight exponent bits, and 23 mantissa bits. OneFlip selects a target weight whose exponent matches the pattern 0xxxxxxx, so that flipping a single exponent bit is enough to change the weight's magnitude radically.
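The amplification is easy to demonstrate with Python's standard library. Which exponent bit gets flipped depends on the chosen weight; here, purely as an illustration, flipping the exponent's top bit (bit 30) of the weight 0.75 multiplies it by 2^128:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = least significant) in the IEEE 754 binary32
    encoding of value and return the resulting float."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return flipped

w = 0.75                  # sign 0, exponent 01111110, mantissa 100...0
print(flip_bit(w, 30))    # exponent becomes 11111110 -> about 2.55e38
```

A weight of that magnitude dwarfs every other contribution to its neuron, which is why one flip suffices to dominate the classification.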

OneFlip constitutes a new frontier in AI security threats, illustrating that the smallest hardware-level manipulation can have a maximal effect on neural network behavior. The attack's ability to preserve near-perfect benign accuracy while achieving 99.9% success from a single-bit modification poses a direct challenge to existing security assumptions.

© 2025 by Global Current News

  • Contact
  • Legal notice
