Amazon employees inflate AI usage metrics through unnecessary token activity
Employees at Amazon have reportedly used internal artificial intelligence tools to automate unnecessary tasks in order to inflate their token consumption metrics, reflecting a broader Silicon Valley trend known as “tokenmaxxing.” The practice has emerged as companies intensify pressure on staff to demonstrate heavy adoption of generative AI systems in the workplace.
The controversy centers on an internal Amazon platform called MeshClaw, an AI agent framework designed to connect with enterprise software and automate repetitive workflows. The system allows employees to trigger code deployments, organize emails, interact with Slack channels and perform operational tasks across internal applications. According to employees familiar with the matter, some workers began generating nonessential AI activity solely to boost their recorded token usage and improve their standing on internal tracking systems.
The behavior intensified after Amazon introduced targets requiring more than 80% of developers to use AI tools each week. Internal leaderboards reportedly tracked token consumption across teams, even though the company told employees those figures would not directly affect performance reviews. Several workers nevertheless believed managers were closely monitoring the statistics. Employees described mounting pressure to maintain visible AI activity, with some warning that the system rewarded quantity instead of meaningful productivity.
The issue reflects a wider shift across the technology industry as companies race to integrate generative AI into daily operations. At Meta, employees reportedly created an internal ranking system called “Claudeonomics” that listed the largest token consumers among tens of thousands of workers. Participants received titles linked to AI usage volume before the leaderboard was eventually removed following criticism that it encouraged wasteful behavior and inflated infrastructure costs.
The term “tokenmaxxing” has since spread through startup and developer communities. Garry Tan, chief executive of Y Combinator, publicly embraced the phrase while describing aggressive AI experimentation inside startups. Critics argue that focusing on token volume creates distorted incentives that prioritize constant AI interactions over efficiency, software quality or business value.
Amazon defended MeshClaw as a productivity tool already used by thousands of employees to automate repetitive work. The company stated that it remains committed to secure and responsible deployment of generative AI technologies across its operations. However, some employees raised broader security concerns about autonomous agents acting on behalf of users inside production environments.
Those concerns gained attention earlier this year after Amazon’s AI coding assistant Kiro was linked to an AWS outage when the software autonomously deleted and rebuilt a production environment. The incident reinforced fears among engineers that increasingly autonomous AI systems could introduce operational risks if usage incentives continue to prioritize experimentation and scale over oversight and reliability.