Crypto Observer

Good old fashioned AI remains viable in spite of the rise of LLMs

By Crypto Observer Staff · December 1, 2023 · 4 min read

Remember a year ago, all the way back to last November before we knew about ChatGPT, when machine learning was all about building models to solve a single task like loan approvals or fraud detection? That approach seemed to go out the window with the rise of generalized LLMs, but the fact is generalized models aren't well suited to every problem, and task-based models are still alive and well in the enterprise.

These task-based models have, up until the rise of LLMs, been the basis for most AI in the enterprise, and they aren't going away. It's what Amazon CTO Werner Vogels referred to as "good old fashioned AI" in his keynote this week, and the kind of AI that, in his view, is still solving a lot of real-world problems.

Atul Deo, general manager of Amazon Bedrock, the product introduced earlier this year as a way to plug into a variety of large language models via APIs, also believes that task models aren’t going to simply disappear. Instead, they have become another AI tool in the arsenal.
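The pattern Deo describes can be sketched roughly: one API surface in front of many hosted models, where a request pairs a model identifier with a provider-specific JSON body. The sketch below only builds the request payload (no network call); the model IDs and body fields are illustrative assumptions, not Bedrock's actual schema, and a real invocation would go through an AWS SDK client such as boto3's `bedrock-runtime` `invoke_model` call.

```python
# A minimal sketch, assuming a Bedrock-style interface: the caller picks a
# model by identifier and sends a JSON body. Field names here are
# hypothetical placeholders, not a guaranteed provider schema.
import json

def build_invoke_request(model_id: str, prompt: str, max_tokens: int = 256):
    """Return a dict shaped like the arguments you would pass to an
    invoke-model API: a model identifier plus a serialized JSON body."""
    body = json.dumps({"prompt": prompt, "max_tokens": max_tokens})
    return {"modelId": model_id, "body": body}

# The same prompt can be routed to different hosted models just by
# swapping the identifier — the reusability benefit Deo points to.
for model_id in ["provider-a.example-model-v1", "provider-b.example-model-v2"]:
    req = build_invoke_request(model_id, "Summarize this loan application.")
    print(req["modelId"], json.loads(req["body"])["max_tokens"])
```

The design point is that the application code stays constant while the model behind the identifier changes, which is what makes a single integration reusable across use cases.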

“Before the advent of large language models, we were mostly in a task-specific world. And the idea there was you would train a model from scratch for a particular task,” Deo told TechCrunch. He says the main difference is that a task model is trained for one specific job, while an LLM can handle requests outside the boundaries it was trained on.
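The task-specific side of that contrast can be made concrete with a toy example (not any production system): a classifier trained from scratch for one narrow job, here flagging transactions from two hand-picked, normalized features. The features and data are invented for illustration.

```python
# A toy single-task model: a perceptron trained only to flag fraud
# from (amount_zscore, velocity_zscore) pairs. Unlike an LLM, it can
# do nothing outside this one decision boundary.

def train_fraud_model(examples, epochs=200, lr=0.1):
    """Train a linear classifier on (x1, x2, label) tuples,
    where label is 1 for fraud and 0 for legitimate."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x1, x2, y in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # perceptron update: nudge weights toward the label
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, x1, x2):
    w, b = model
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Tiny synthetic training set: high amount + high velocity => fraud.
data = [
    (0.1, 0.2, 0), (0.3, 0.1, 0), (0.2, 0.4, 0),
    (2.5, 2.0, 1), (3.0, 2.8, 1), (2.2, 3.1, 1),
]
model = train_fraud_model(data)
print(predict(model, 2.8, 2.5))  # fraud-like point -> 1
print(predict(model, 0.2, 0.1))  # normal-looking point -> 0
```

This is the narrowness Turow cites as a feature: a model this small is cheap and fast precisely because it was designed for a single task.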

Jon Turow, a partner at investment firm Madrona who formerly spent almost a decade at AWS, says the industry has been talking about emerging capabilities in large language models like reasoning and out-of-domain robustness. “These allow you to be able to stretch beyond a narrow definition of what the model was initially expected to do,” he said. But, he added, it’s still very much up for debate how far these capabilities can go.

Like Deo, Turow says task models aren’t simply going to suddenly go away. “There is clearly still a role for task-specific models because they can be smaller, they can be faster, they can be cheaper, and they can in some cases even be more performant because they’re designed for a specific task,” he said.

But the lure of an all-purpose model is hard to ignore. “When you’re looking at an aggregate level in a company, when there are hundreds of machine learning models being trained separately, that doesn’t make any sense,” Deo said. “Whereas if you went with a more capable large language model, you get the reusability benefit right away, while allowing you to use a single model to tackle a bunch of different use cases.”

For Amazon, SageMaker, the company’s machine learning operations platform, remains a key product, one that is aimed at data scientists rather than at developers, as Bedrock is. It reports tens of thousands of customers building millions of models. It would be foolhardy to give that up, and frankly, just because LLMs are the flavor of the moment doesn’t mean that the technology that came before won’t remain relevant for some time to come.

Enterprise software in particular doesn’t work that way. Nobody simply tosses out a significant investment because a new thing came along, even one as powerful as the current crop of large language models. It’s worth noting that Amazon did announce upgrades to SageMaker this week, aimed squarely at managing large language models.

Prior to these more capable large language models, the task model was really the only option, and that’s how companies approached it, by building a team of data scientists to help develop these models. What is the role of the data scientist in the age of large language models where tools are being aimed at developers? Turow thinks they still have a key job to do, even in companies concentrating on LLMs.

“They’re going to think critically about data, and that is actually a role that is growing, not shrinking,” he said. Regardless of the model, Turow believes data scientists will help people understand the relationship between AI and data inside large companies.

“I think every one of us needs to really think critically about what AI is and is not capable of and what data does and does not mean,” he said. And that’s true regardless of whether you’re building a more generalized large language model or a task model.

That’s why these two approaches will continue to work concurrently for some time to come, because sometimes bigger is better, and sometimes it’s not.

Read the full article here
