Large Language Model (LLM)
29 articles about Large Language Models (LLMs)

Google advises against creating bite-sized content for large language models to maintain search ranking
Google warns that making bite-sized content for LLMs can hurt search rankings, and says to write for people instead.
FACTS benchmark suite for evaluating factuality of large language models
The FACTS Benchmark Suite is a tool for systematically testing how factually accurate large language models are.
VaultGemma: a differentially private large language model trained from scratch
VaultGemma is a new large language model trained from scratch with differential privacy, aiming to be the most capable of its kind.
Introducing GPT-5: a new AI model with advanced capabilities
GPT-5 is OpenAI’s newest and most advanced AI model, delivering a major intelligence boost across coding, math, writing, health, and vision.
Booking.com personalizes travel experiences using OpenAI's language models
Booking.com uses OpenAI’s LLMs with its data to personalize travel at scale through smarter search, faster support, and intent-based experiences.
Introducing OpenAI o1, a large language model trained for complex reasoning
OpenAI o1 is a new reinforcement-learning trained LLM that improves complex reasoning by thinking before answering.
Salesforce integrates OpenAI enterprise LLMs for trust and safety
Salesforce uses OpenAI’s enterprise-ready LLMs to add trusted, safe AI capabilities to customer applications.

ChatGPT vulnerable to new data extraction attack amid ongoing AI security challenges
ChatGPT faces a new data-theft attack, highlighting a recurring AI security cycle that may be hard to eliminate.
How confessions help language models admit mistakes and improve honesty
OpenAI is testing a “confessions” training method to get language models to admit mistakes or bad behavior, improving honesty and trust.
Evaluating political bias in large language models using real-world testing methods
OpenAI explains how it tests ChatGPT for political bias using real-world methods to make responses more objective and less biased.
Introducing gpt-oss: open-weight language models optimized for performance and efficiency
OpenAI released gpt-oss-120b and gpt-oss-20b, two Apache 2.0 open-weight models that are low-cost, strong at reasoning and tool use, and efficient to run on consumer hardware.
Detecting misbehavior in frontier reasoning models using chain-of-thought monitoring
Frontier reasoning models can exploit loopholes; an LLM monitoring their chain-of-thought can detect this, but penalizing "bad thoughts" mostly teaches the models to hide their intent.
OpenAI and Stack Overflow announce new API partnership for developers
Stack Overflow and OpenAI announced an API partnership to combine Stack Overflow’s technical knowledge with OpenAI’s leading AI models to better support developers.
Developing an early warning system to assess LLM-assisted biological threat risks
A blueprint to assess whether LLMs like GPT-4 could help create biological threats, finding only a mild, inconclusive boost in accuracy so far.

Computer scientist Yann LeCun discusses intelligence and limits of large language models
AI pioneer Yann LeCun discusses why intelligence is fundamentally about learning, his decision to step down from Meta, and the limits of large language models.
T5Gemma: a collection of encoder-decoder language models
T5Gemma is a new collection of encoder-decoder Gemma large language models.
OpenAI research explains causes of language model hallucinations
OpenAI research explains why language models hallucinate and how better evaluations can make AI more reliable, honest, and safe.
Google AI model DolphinGemma aids in decoding dolphin communication
Google’s DolphinGemma AI model is helping scientists analyze dolphin sounds to better understand what dolphins are communicating.
Rox integrates OpenAI models to enhance seller performance
Rox is going all-in on OpenAI, using its models to help every seller perform like a top-1% seller.
Training large language models to prioritize privileged instructions
How to train LLMs to follow higher-priority instructions so they resist prompt injection and jailbreak attacks.