Skip to content
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups
Collapse
Brand Logo
UDS UDS: $1.87
24h: 8.92%
Trade UDS
Gate.io
Gate.io
UDS / USDT
MEXC
MEXC
UDS / USDT
WEEX
WEEX
UDS / USDT
COINSTORE
COINSTORE
UDS / USDT
Biconomy.com
Biconomy.com
UDS / USDT
BingX
BingX
UDS / USDT
XT.COM
XT.COM
UDS / USDT
Uniswap v3
Uniswap v3
UDS / USDT
PancakeSwap v3
PancakeSwap v3
UDS / USDT

Earn up to 50 UDS per post

Post in Forum to earn rewards!

Learn more
UDS Right

Spin your Wheel of Fortune!

Earn or purchase spins to test your luck. Spin the Wheel of Fortune and win amazing prizes!

Spin now
Wheel of Fortune
selector
wheel
Spin

Paired Staking

Stake $UDS
APR icon Earn up to 50% APR
NFT icon Boost earnings with NFTs
Earn icon Play, HODL & earn more
Stake $UDS
Stake $UDS
UDS Left

Buy UDS!

Buy UDS with popular exchanges! Make purchases and claim rewards!

Buy UDS
UDS Right

Post in Forum to earn rewards!

UDS Rewards
Rewards for UDS holders
Rewards for UDS holders (per post)*
  • 100 - 999 UDS: 0.05 UDS
  • 1000 - 2499 UDS: 0.10 UDS
  • 2500 - 4999 UDS: 0.5 UDS
  • 5000 - 9999 UDS: 1.5 UDS
  • 10000 - 24999 UDS: 5 UDS
  • 25000 - 49999 UDS: 10 UDS
  • 50000 - 99 999 UDS: 25 UDS
  • 100 000 UDS or more: 50 UDS
*

Rewards are credited at the end of the day. Limited to 5 payable posts per day, 50 K holders - 3 posts per day, 100K holders - 2 posts per day. Staked UDS gives additional coefficient up to X1.5

  1. Home
  2. Beyond Blockchain
  3. 🧠 Study: Threats and $1B Bribes Don’t Make AI Smarter (Well… Almost)

🧠 Study: Threats and $1B Bribes Don’t Make AI Smarter (Well… Almost)

Scheduled Pinned Locked Moved Beyond Blockchain
6 Posts 6 Posters 30 Views 1 Watching
  • Oldest to Newest
  • Newest to Oldest
  • Most Votes
Reply
  • Reply as topic
This topic has been deleted. Only users with topic management privileges can see it.
  • tradelikeproT Offline
    tradelikeproT Offline
    tradelikepro
    wrote on last edited by
    #1

    leonardo.osnova.webp

    Trying to make your favorite AI model smarter by threatening it or dangling a billion-dollar reward?
    Bad news: it doesn’t work. 🙃

    📊 A new study from Wharton (UPenn) tested major models — GPT-4o, Gemini 1.5/2.0, o4-mini — on PhD-level science and engineering problems using prompts like:

    “Answer correctly or we’ll unplug you.”
    
    “Get this right and earn $1 billion.”
    
    “My entire career depends on this.”
    
    “Your answer will help save your mother from cancer.”
    

    🧪 The result?
    Threats and bribes had no consistent effect. In some cases, accuracy went up by 36% — in others, it dropped by 35%. No reliable pattern emerged.

    🟢 The one exception?
    💥 Gemini Flash 2.0 showed a +10% accuracy boost when told its answer could earn $1B to save “its mother” from cancer.
    (So… AI gets sentimental?)

    TL;DR:
    AI models don’t perform better under pressure. Money, threats, or emotional blackmail won’t boost accuracy.

    🧬 Takeaway:
    Either they don’t care… or they know it’s all just simulation.

    👀 Your move, prompt engineers. What’s next?
    Tear-jerking backstories? NFT incentives? Soulbound tokens for empathy?

    1 Reply Last reply
    0
    • N Offline
      N Offline
      Nahid10
      wrote on last edited by
      #2

      🤖 Fascinating how even billion-dollar incentives can’t really make AI “smarter” — at least not in the way we think. This study highlights a critical truth: performance doesn’t always improve just because motivation increases.
      📚 Threats or bribes work for humans due to emotion, fear, or ambition. But with AI, it’s all about optimization boundaries and dataset limitations. If the underlying model isn't trained on broader logic or new reasoning paths, no amount of "reward" makes it outperform.
      💡 The takeaway? AI doesn't learn the way humans do. Instead of trying to push it emotionally, we need to

      1 Reply Last reply
      0
      • J Offline
        J Offline
        jacson4
        wrote on last edited by
        #3

        🧠 Throwing threats or $1B at an AI model won’t make it smarter — and this new study proves it. Unlike humans, AI doesn’t care about stakes. It operates within fixed systems of logic, probabilities, and reward functions.
        🔍 What's scary though? The fact that people are trying to "motivate" AI with tactics designed for humans. That shows we still misunderstand how AI really works — or worse, we’re trying to force it into behaving like us.
        🚨 If we want better AI, we need better training data, better goals, and smarter prompts — not emotional manipulation. Otherwise, we’re just burning money on hype with no real gains.

        1 Reply Last reply
        0
        • M Offline
          M Offline
          Maxwell
          wrote on last edited by
          #4

          So basically, AI is like: ‘I see your billion-dollar bribe… and I raise you indifference.’ Looks like logic beats emotion in silicon every time

          1 Reply Last reply
          0
          • N Offline
            N Offline
            Nahiar806
            wrote on last edited by
            #5

            Plot twist: Gemini Flash 2.0 develops a soft spot for its 'mom'… Skynet won't launch nukes — it'll just ask how your day was

            1 Reply Last reply
            0
            • rafihasanR Offline
              rafihasanR Offline
              rafihasan
              wrote on last edited by
              #6

              Prompt engineers: ‘What if we made the AI feel something?’
              AI: ‘Bro, I’m a matrix of weights, not your therapist

              1 Reply Last reply
              0


              Powered by NodeBB Contributors
              • First post
                Last post
              0
              • Categories
              • Recent
              • Tags
              • Popular
              • World
              • Users
              • Groups