University of Michigan Researchers Propose G-ACT: A Scalable Machine Learning Framework to Steer Programming Language Bias in LLMs



LLMs and the Need for Scientific Code Control

LLMs have rapidly evolved into capable natural language processors, enabling the development of agentic systems that manage complex workflows. However, the use of LLM agents for generating scientific code remains largely unexplored. Scientific software depends primarily on C++, CUDA, and other low-level languages, which are underrepresented in most pretraining datasets. As a result, implementations generated by LLMs often contain syntactic or semantic errors that lead to compilation failures or unstable runtime behavior. Existing agents rely heavily on user-specified control primitives and carefully crafted prompts, which are prone to misinterpretation and can lead to erratic execution flows.

Limitations of Existing Steering Methods

Recent approaches tackle LLM steering by uncovering causal links within model activations and enabling precise neuron-level interventions. Supervised fine-tuning (SFT), weight-modulation techniques, and reinforcement learning from human feedback (RLHF) are direct interventions for model steering, but they incur significant computational overhead and may reduce the model's robustness and general performance. Activation patching, which uses corrupted inputs as a baseline distribution, is widely adopted for fine-grained output control. However, these methods demand extensive model sweeps involving millions of evaluations, and they are typically validated on multiple-choice benchmarks rather than real-world deployment scenarios.
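To make the contrast concrete, the sketch below shows the generic activation-steering idea these methods build on: a vector added to one layer's hidden states via a forward hook during generation. The model name, layer index, steering strength, and the random placeholder vector are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of activation steering via a forward hook on one decoder layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.2-3B-Instruct"  # any causal LM with a .model.layers list works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

hidden_size = model.config.hidden_size
steering_vector = torch.randn(hidden_size)  # placeholder; normally derived from activation differences
alpha = 4.0                                 # steering strength (hypothetical)
layer_idx = 10                              # which decoder layer to steer (hypothetical)

def steer_hook(module, inputs, output):
    # Decoder layers usually return a tuple whose first element is the hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * steering_vector.to(hidden.dtype).to(hidden.device)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[layer_idx].register_forward_hook(steer_hook)

ids = tok("Write a function that multiplies two dense matrices.", return_tensors="pt").input_ids
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=128, do_sample=True, temperature=1.0)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # always detach the hook after use
```

The hook runs on every forward pass during generation, so the same bias is applied consistently to each generated token.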

Introduction of G-ACT Framework

Researchers from the University of Michigan have proposed a gradient-refined adaptive activation steering framework (G-ACT) to steer scientific code generation toward specific programming languages in LLMs. The framework grew out of an evaluation of five causal LLMs on scientific coding prompts. G-ACT clusters per-prompt activation differences into steering directions and uses lightweight per-layer probes, trained and refined online, to select suitable steering vectors. The framework supports concept-level control while remaining scalable and interpretable, providing a practical method for achieving reproducible behavior in agentic systems that require consistent programming language choices for scientific computing tasks.
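A minimal sketch of those two ingredients, assuming per-prompt activation differences have already been collected at one layer: k-means clustering turns them into candidate steering directions, and a lightweight linear probe learns to pick a direction for a new prompt. The shapes, the use of scikit-learn, and the training loop are illustrative, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

hidden_size, n_prompts, n_directions = 3072, 200, 4

# Per-prompt activation differences at one layer (e.g., target-language run minus
# baseline run); random placeholders stand in for real collected activations.
activation_diffs = np.random.randn(n_prompts, hidden_size).astype(np.float32)

# 1) Cluster the differences; the centroids serve as candidate steering directions.
kmeans = KMeans(n_clusters=n_directions, n_init=10, random_state=0).fit(activation_diffs)
steering_directions = torch.tensor(kmeans.cluster_centers_)        # (n_directions, hidden_size)
direction_labels = torch.tensor(kmeans.labels_, dtype=torch.long)  # direction assigned to each prompt

# 2) Lightweight per-layer probe: maps a prompt's activation to a direction index.
probe = nn.Linear(hidden_size, n_directions)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
acts = torch.tensor(activation_diffs)

for _ in range(50):  # online refinement loop, simplified to full-batch updates
    opt.zero_grad()
    loss = nn.functional.cross_entropy(probe(acts), direction_labels)
    loss.backward()
    opt.step()

# At inference time, the probe selects which steering direction to add at this layer.
new_act = torch.randn(1, hidden_size)
chosen = probe(new_act).argmax(dim=-1)
vector_to_apply = steering_directions[chosen]  # added to the residual stream during generation
```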

Model Evaluation and Baseline Biases

The researchers evaluate five instruction-tuned LLMs: Llama-3.2-3B-Instruct, Llama-3.3-70B-Instruct, Qwen2.5-Coder-32B-Instruct, Qwen2.5-14B-Instruct-1M, and QwQ-32B. Each model is tested on 84 benchmark questions, with 25 repetitions per prompt at sampling temperature 1.0 to ensure statistical stability. The language-preference results reveal that Llama-3.2-3B strongly defaults to Java (76.2%), while Llama-3.3-70B favors Python (73.8%). The Qwen models show different biases: Qwen2.5-Coder prefers Python (59.5%), while Qwen2.5-14B favors Julia (66.7%). These baseline measurements show that model scale, architectural design, and fine-tuning data collectively create reproducible biases.
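A hedged sketch of how such a baseline-bias measurement can be run: sample each prompt repeatedly at temperature 1.0 and tally the language of each completion. The detect_language heuristic and the example prompt are stand-ins; the paper's 84-question benchmark and its language classifier are not reproduced here.

```python
from collections import Counter
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.2-3B-Instruct"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

def detect_language(code: str) -> str:
    # Crude keyword heuristic, purely illustrative.
    if "#include" in code or "std::" in code:
        return "C++"
    if "public static void main" in code:
        return "Java"
    if "function " in code and "end" in code:
        return "Julia"
    return "Python"

prompt = "Write a program that solves a tridiagonal linear system."
counts = Counter()
for _ in range(25):  # 25 repetitions per prompt, as in the evaluation
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=200, do_sample=True, temperature=1.0)
    counts[detect_language(tok.decode(out[0], skip_special_tokens=True))] += 1

print({lang: n / 25 for lang, n in counts.items()})  # empirical language preference for this prompt
```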

Static Neuron Activation and Language Biasing

The static analysis covers two settings: inducing a language-preference bias and testing code generation. The preference-bias results show that selectively activating individual MLP neurons in Llama-3.2-3B-Instruct yields strong causal control over programming language selection. When targeting C++ generation, nearly 100% of outputs are C++ across most problems, virtually eliminating Python, Java, and Julia. Code generation testing reveals two distinct behavioral regimes: Python-leaning tasks produce 40-80% Python outputs for high-level operations, while C++-dominant tasks exhibit 60-90% C++ preference for performance-critical routines. Overall, the steered model generates C++ about 73% of the time, more often than Python, but still defaults to Python for a significant portion of prompts.
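The neuron-level intervention can be sketched as follows, assuming a LLaMA-style MLP in which down_proj consumes the intermediate activations: a forward pre-hook pins one neuron to a fixed value during generation. The layer index, neuron index, and clamp value below are hypothetical, not the neurons identified in the study.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.2-3B-Instruct"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

layer_idx, neuron_idx, clamp_value = 12, 4096, 8.0  # hypothetical choices

def clamp_neuron(module, args):
    # args[0] is the intermediate activation entering down_proj:
    # shape (batch, seq_len, intermediate_size).
    hidden = args[0].clone()
    hidden[..., neuron_idx] = clamp_value
    return (hidden,)

mlp = model.model.layers[layer_idx].mlp
handle = mlp.down_proj.register_forward_pre_hook(clamp_neuron)

ids = tok("Implement a fast Fourier transform.", return_tensors="pt").input_ids
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=128, do_sample=True, temperature=1.0)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()
```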

Gradient-Refined Activation Steering Results

In this paper, the researchers present gradient-refined adaptive activation steering that controls programming language selection in scientific code generation. The framework achieves substantial improvements, increasing probe classification accuracy from 0% to 61.5% in the early layers of Llama-3.2-3B. Despite a modest runtime overhead (generation is 1.3-1.4 times slower), the framework remains practical through selective layer steering and caching optimizations. G-ACT offers a scalable and interpretable approach to concept-level control that goes beyond programming languages by embedding persistent transformation matrices. This ensures consistent model behavior across users and sets a new standard for reliable LLM steering in scientific computing contexts.
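The gradient-refinement step can be illustrated roughly as follows: starting from a clustered direction, the steering vector is nudged by gradient descent so that a frozen per-layer probe classifies steered activations as the target language. The dimensions, optimizer settings, and the frozen probe are placeholders rather than the paper's exact procedure.

```python
import torch
import torch.nn as nn

hidden_size, n_languages, target_language = 3072, 4, 2  # e.g. index 2 = "C++" (hypothetical)

probe = nn.Linear(hidden_size, n_languages)  # assumed already trained; kept frozen here
for p in probe.parameters():
    p.requires_grad_(False)

base_direction = torch.randn(hidden_size)            # candidate direction from the clustering step
steering = base_direction.clone().requires_grad_(True)
opt = torch.optim.Adam([steering], lr=1e-2)

activations = torch.randn(64, hidden_size)           # cached layer activations for a batch of prompts
target = torch.full((64,), target_language)          # desired language label for every prompt

for _ in range(100):
    opt.zero_grad()
    steered = activations + steering                 # apply the candidate steering vector
    loss = nn.functional.cross_entropy(probe(steered), target)
    loss.backward()
    opt.step()

# The refined vector is then cached and added only at selected layers at generation
# time, which is what keeps the runtime overhead modest.
```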

Check out the Paper. All credit for this research goes to the researchers of this project.

Sajjad Ansari is a final-year undergraduate at IIT Kharagpur. As a tech enthusiast, he delves into the practical applications of AI, with a focus on understanding AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.



