Stephen Wolfram’s Post

Stephen Wolfram

Founder & CEO at Wolfram Research

A remarkable technology synergy: human-like meets precise+computational.  How does the impressively human-like ChatGPT get computational knowledge superpowers? Give it a Wolfram|Alpha neural implant!  https://lnkd.in/e8Jnygb8
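The pattern the post describes can be sketched in a few lines: route a computational question to Wolfram|Alpha and hand the exact result back to the language model as context. A minimal sketch, assuming the Wolfram|Alpha Short Answers API; the routing step and prompt shape here are hypothetical placeholders, not the actual integration from the post.

```python
# Minimal sketch of the "neural implant" pattern: compute the fact with
# Wolfram|Alpha, then splice the verified result into the LLM's prompt.
import requests

WOLFRAM_APPID = "YOUR-APPID"  # placeholder; issued at developer.wolframalpha.com

def wolfram_short_answer(query: str) -> str:
    """Return a single plain-text result from the Short Answers API."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

# Instead of letting the model guess, give it the computed fact as context.
fact = wolfram_short_answer("distance from Chicago to Tokyo")
prompt = f"Using the verified fact '{fact}', answer the user's question."
```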

Alex McFarlane

Co-Founder at Keyring Network

1y

Is this something that is currently available? One thing I've found is that ChatGPT struggles with precise mathematical language. It is very effective at summaries and at glossing over detail, but when you require a very precise answer that depends on effectively compiled logic, it usually fails. A specific example is asking it for Python code that generates a hash: the hash in the "expected output" will usually be wrong, and the ordering of list elements in the outputted arrays will often be incorrect. Unsurprising, as it's not a compiler! I think we've also seen this in AI-generated math LaTeX: it "looks" the business but actually makes no sense.
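The hash point is easy to demonstrate: a digest is only knowable by executing the code, so any "expected output" a language model writes from memory is a token-by-token guess. A minimal example (the input string is arbitrary):

```python
# The printed digest can only be obtained by actually running the code;
# a model predicting the "expected output" from training data is guessing.
import hashlib

digest = hashlib.sha256(b"hello world").hexdigest()
print(digest)
# b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
```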

Eric JACOPIN

Senior AI Programmer at HAWKSWELL

1y

I followed the link you provided: what a useful post! Helping ChatGPT with correct mathematical values and reasoning must indeed be achieved. However, working as an AI Game Dev and using Mathematica (almost daily) for prototyping and modelling, I see similar, simpler steps that could be taken in the field of Game AI, such as an Unreal Engine 5 - Mathematica integration inside UE5's editor, or crunching UE (data) assets to produce any kind of report. More generally, procedural generation is another domain (Game AI, but not only :) where Mathematica can definitely help: "Please generate and tag those waypoints where the player team could take cover, and report on the best covers." There is surely a market for this ChatGPT - Mathematica integration, but there are other useful integrations that could be achieved as well.

Igor Halperin

Finance AI & Quant (FAIQ) Research Dude

1y

The example with the integral of x^2 cos(2x) shows that ChatGPT believes that the integral of a product equals the product of the integrals :)
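For reference, the correct antiderivative follows from integrating by parts twice; differentiating the result recovers the integrand, which the product-of-integrals shortcut does not:

```latex
% Integrate by parts twice (first u = x^2, then u = x):
\int x^{2}\cos(2x)\,dx
  = \frac{x^{2}}{2}\sin(2x) - \int x\sin(2x)\,dx
  = \frac{x^{2}}{2}\sin(2x) + \frac{x}{2}\cos(2x) - \frac{1}{4}\sin(2x) + C
```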

Andreas Rhode

Director of Data Science at NOCD

1y

Semantic + symbolic is the way, though I think the symbolic will be learned (a la AlphaZero). What is a self-supervised or self-play approach to distilling symbolic representations? Sheer scale is one way to achieve a degree of compositionality (as we've seen with ChatGPT, remarkably) but scale certainly can't be the only way. Big fan btw. Your 3 interviews with Lex are some of my all-time favs.

Sandeep Sreekumar

Co founder, COO IndustryApps, Ex Global Head Digital Operations Henkel, Industrial DataSpace expert, Industry 4.0, Smart Factory Technology expert

1y

Excellent thoughts on combining communication with computational skills. Combine them and the possibilities are enormous in the fintech and B2B space.

Krzysztof Miksa

Seasoned Engineering Director ♦ HD/ADAS ♦ GIS Mapping ♦ Big Data ♦ All opinions published on LI are my own

1y

This is a big problem with ChatGPT: answers look good, yet they must be verified before use. There is no mechanism in place that verifies an answer for correctness, and the underlying mechanism is based on statistical generalisation. As we all know, the crowd is not always right.

Luke Macomber

Manager Programming at ESPN

1y

ChatGPT, from a functionalist view, works by association, executing a loss function through comparison and optimized, reinforcement-weighted options. Because of tokenization, this is a complex computation compared to what human brains can calculate. The breadth of computational expense is possible because it is executed on a supercomputer. Why would you want to connect these two different approaches: complex token association vs. computational reduction via natural-language prompts? At root, both systems work with tokenization and computation; they just do it in different ways. So why not ensemble them computationally, at a "machine level", rather than at the resultant human level of natural language? Neither system "understands" data at the level of natural language. What both systems communicate in is computation, and each then transposes its computational representation into natural language, albeit by different routes. By limiting the interaction of two computational systems to natural language, you'd be introducing unnecessary noise. I can do that, as you showed, on my own. Wouldn't connection at the computational level be more effective/efficient?

Karol Gawron

Founder at bards.ai | NLP Researcher | AI consultant

1y

I'm sure you'll be interested in the work showing how language models can use other tools during the reasoning process: https://ai.googleblog.com/2022/11/react-synergizing-reasoning-and-acting.html There has even been an integration in one of the tools, so you can test it yourself 😉 At the moment it works for browser search and a calculator, among other things, but I'm certain that WolframAlpha can be integrated in a similar way: https://langchain.readthedocs.io/en/latest/modules/agents/examples/custom_tools.html
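Following the custom-tools page linked above, wiring Wolfram|Alpha into a ReAct-style agent looks roughly like this. A sketch assuming the early-2023 LangChain agent API (Tool, initialize_agent) and the Wolfram|Alpha Short Answers API; the app ID is a placeholder:

```python
# Sketch of a Wolfram|Alpha tool for a LangChain ReAct agent, per the
# custom-tools docs linked above. Early-2023 LangChain API assumed.
import requests
from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI

def wolfram_query(q: str) -> str:
    # Wolfram|Alpha Short Answers API: one plain-text result per query.
    r = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": "YOUR-APPID", "i": q},  # placeholder app ID
        timeout=10,
    )
    return r.text

tools = [
    Tool(
        name="WolframAlpha",
        func=wolfram_query,
        description="Computes exact answers to math and factual queries.",
    )
]

agent = initialize_agent(
    tools, OpenAI(temperature=0), agent="zero-shot-react-description"
)
agent.run("What is the integral of x^2 cos(2x)?")
```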
