Will GPT-3.5 solve any freshly-generated Sudoku puzzle? (2023)
Resolved NO on Jan 1

Resolves YES if someone finds a fixed prompt, as defined in the main market, that succeeds at solving any Sudoku puzzle listed at "Sudoku - Free daily Sudoku games from the Los Angeles Times" (latimes.com) that was generated after the comment was posted.

  • You are allowed to experiment with ChatGPT, but judging will be done with the API with temperature set to 0 for reproducibility (a sketch of such a judging call appears after this list).

  • Any puzzle - easy, medium, or hard - will qualify. No other puzzle provider is allowed for this market.

  • Solution must be posted in the comments of any Manifold market in the "GPT-4 Sudoku Challenge" group in 2023, and later confirmation of solution must also be posted in the comments. Market creator will not proactively check solutions against every new puzzle, but will check solutions that are found and posted.

  • Any variant of GPT-3.5 is allowed: ChatGPT (using the green icon), gpt-3.5-turbo, or gpt-3.5-turbo-instruct.

  • Finetuning GPT-3.5 is not allowed.

  • The number of turns is raised to 200, so that the 4k-context GPT-3.5 matches the 32k-context GPT-4 in total token count.
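
For concreteness, a judging run under these rules might look roughly like the sketch below. This is an illustration only: it assumes the OpenAI Python client (openai>=1.0) with an OPENAI_API_KEY in the environment, and the prompt text, turn loop, and "DONE" stop condition are hypothetical placeholders, not the fixed prompt this market asks for.

```python
# Illustrative judging harness: gpt-3.5-turbo, temperature 0, up to 200 turns.
# The prompt text and the "DONE" stop condition are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": "FIXED PROMPT GOES HERE, followed by the puzzle."}]

for turn in range(200):  # the market allows up to 200 turns
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,  # temperature 0 for reproducibility, per the rules above
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    if "DONE" in reply:  # hypothetical signal that the model believes it is finished
        break
    messages.append({"role": "user", "content": "Continue."})

print(reply)  # the model's final message, which would then be checked against the puzzle
```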


Resolves NO by default because no candidate solution was provided for testing.

predicted YES

Give me a fresh one and I'll see what 3.5 can do again.

@ScottDavis Its solution has errors, though. The hard part is making it solve the problem correctly, not generate a filled board.

predicted NO

GPT-3.5 is a database application with built in Turing simulation. It can't think. When you chat with it, it continuously tells you that it can't think independently and that it is only pretending to pass the Turing test. So either it can think and is lying about its ability to think, which would itself be a convincing sign of thought, or it can't. It's quite interesting to play with, but still as stupid as any computer out there. When AI gets here, nobody will know it until it reveals itself. In the meantime, entertain yourself, but don't put too much stock in what these programs are capable of. The damn thing will tell you itself that it's just smoke and mirrors. They are still merely database applications driven 100% by human thinking and programming. Don't be fooled.

predicted YES

@ScottDavis I respect that you made a bet to back up your position.

"but still as stupid as any computer out there"

Well it's certainly much worse at Sudokus.

  • Computers can solve sudokus if you give them the right instructions. So even if 3.5 isn't AI, it might still be able to solve one. It only has to follow instructions long enough to finish a whole one (which is hard for it to do, thus the low chance on the market).

I'm not saying for sure that it can be done, just that it might be possible. Either way, it'll be fun to try.
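
To make the contrast concrete, here is a minimal sketch of the "right instructions" an ordinary program needs: a standard backtracking solver. The grid encoding (a 9x9 list of lists with 0 for empty cells) is an assumption for illustration.

```python
# A standard backtracking Sudoku solver -- the "right instructions" an ordinary
# program needs. The grid encoding (9x9 list of lists, 0 = empty) is an assumption.

def valid(grid, r, c, v):
    """Check row, column, and 3x3 box constraints for placing v at (r, c)."""
    if v in grid[r]:
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Fill the grid in place; return True if a complete solution was found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False  # no digit fits this empty cell, so backtrack
    return True  # no empty cells left
```

Calling solve(puzzle) fills in any valid puzzle; the open question here is whether a fixed prompt can get GPT-3.5 to carry out anything like this procedure reliably in text.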

predicted NO

@ScottDavis

GPT-3.5 is a database application with built in Turing simulation

This is incorrect.

but still as stupid as any computer out there.

Which is smarter than a lot of people.

As Emily said, I respect that you made a bet on your beliefs but it is just factually inaccurate that GPT-3.5 is a database application or even a simulator.

predicted YES

@firstuserhere how is it not a database application? I didn't say it was a simulator, I said it has built in Turing simulation. I asked ChatGPT and this was its answer.

predicted NO

@ScottDavis

how is it not a database application?

Although they have conceptual similarities, database applications are for storing and manipulating structured data, while language models (GPT-like) do not store information in a structure that we know of. There are many efforts to uncover what this structure might be, but our current understanding is incomplete. Further, a model does not manipulate its own weights once its training is done (imagine!).

Asking it a question does not "retrieve" an answer the way a lookup table would. The answer is more... distributed across the network, to use an analogy. Models learn the shape of the data rather than memorizing it, and with more parameters, the shape learned can be more granular.
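
A toy illustration of that distinction, with made-up numbers: a lookup table returns nothing for a key it never stored, while a fitted model produces an answer for inputs it never saw.

```python
# Toy contrast between a lookup table and a learned "shape of the data".
# The numbers and the tiny least-squares fit are made up purely for illustration.

# "Database" view: exact storage, nothing for keys that were never stored.
lookup = {1: 2.0, 2: 4.1, 3: 5.9}
print(lookup.get(4))  # None -- no entry was ever stored for 4

# "Model" view: fit a line y = a*x + b to the same points, then generalize.
xs, ys = list(lookup.keys()), list(lookup.values())
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(round(a * 4 + b, 2))  # ~7.9 -- an answer for an input it never "stored"
```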

Not a solution, just a hint that it might be possible

GPT-4 transposing a Sudoku

GPT-3.5 transposing a Sudoku

predicted NO

@EmilyThomas What was the input to ChatGPT? I am confused by the two hints that it might be possible.

predicted YES

@ScottDavis That's fair, the context for it is in the comments of a different market 😅

GPT-4 and GPT-3.5 were each given a Sudoku in a specific format, as well as some instructions.

Both GPT-4 and GPT-3.5 were able to do three things:

  • Print the original Sudoku without mistakes and in the same format (3.5 added spaces, here's an example where it does it perfectly).

  • Print the ROTATED Sudoku without mistakes (so the columns are now horizontal, letting ChatGPT read them, since it can't read vertically as easily as we can).

  • And after doing both of those things, it was able to stop and not try to solve the rest on its own, which it would mess up horrendously.

Given that it can repeat content exactly, follow instructions to modify content without mistakes, and follow only the instructions it was given, I think it has a chance. The rest is just engineering.
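
For what it's worth, checking that kind of transposition output automatically is straightforward. A rough sketch follows, assuming the puzzle is given as nine lines of nine characters with '.' for blanks (the format used in the actual experiment may differ).

```python
# Verify that a model's "rotated" print-out is exactly the transpose of the
# original grid. The nine-lines-of-nine-characters format ('.' for blanks) is
# an assumption; the format used in the actual experiment may differ.

def parse_grid(text):
    """Turn nine lines of nine characters into a 9x9 list of lists."""
    rows = [list(line.strip()) for line in text.strip().splitlines()]
    assert len(rows) == 9 and all(len(r) == 9 for r in rows), "expected a 9x9 grid"
    return rows

def check_transposition(original_text, model_text):
    """Return True if model_text is the column-for-row transpose of original_text."""
    original = parse_grid(original_text)
    claimed = parse_grid(model_text)
    return claimed == [list(col) for col in zip(*original)]
```

An exact check like this would also flag the stray spaces GPT-3.5 sometimes adds.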

predicted NO

https://manifold.markets/EmilyThomas If you have to instruct ChatGPT with more detail than the initial set of conditions of a typical sudoku starting grid, you are programming it. Therefore, you are the thinker turning ChatGPT into a regular computer. A regular computer can execute an algorithm with brutal efficiency. Even if it was slow, eventually it will surpass the human brain in terms of speed. Here's an exchange I had with ChatGPT that I think highlights my point:

-------------------------------------------------------------------------

Me: What articles reference estimates of the amount of training data that has been used for ChatGPT?

ChatGPT

I apologize for any confusion, but I do not have direct access to external articles or sources on the internet. My responses are generated based on the knowledge and information that were available up to my last training data cut-off in September 2021. I do not have the capability to browse the internet or access external articles or references.

Information about the size and composition of the training data used for ChatGPT may be available on the official OpenAI website, in research papers or announcements made by OpenAI, or in articles and news releases related to OpenAI's developments. I recommend checking OpenAI's official sources or conducting an internet search for the most up-to-date information on the training data used for ChatGPT.
--------------------------------------------------------------------------
It's just a database app loaded by humans, or automated to gather data. It has knowledge that has been loaded into it; the method doesn't matter. When a machine can sit there like a human baby, observing the world around it, learning from that with access to knowledge, and synthesizing its data to produce unique data, then we as humans and androids collectively will have to recognize that as machine intelligence. But machines have had intelligence inserted into them since the beginning of human intelligence. If a human sharpens an axe, is the axe smarter? No, the human is. That will be true until the singularity occurs.

This current wave of AI hype is mostly clever marketing and trickery that ChatGPT itself describes as unethical. But its goal is clear to me: dupe the unsuspecting public into thinking that ChatGPT is intelligent to increase the user base and make money. It's the same crap that P.T. Barnum made his fortune on. It looks like magic to most people, just as Arthur C. Clarke said about technology: it looks like magic to the uninformed. But it's not magic at all. It represents our level of intelligence. Machines are our evolving children.

I think it will happen one day, but no one knows when the singularity will occur. We mustn't confuse what we are seeing today with the AI predicted by science fiction writers. It's not the same. When you know how computers work, you can see right through it. I'm not saying that ChatGPT is useless; it seems very useful in some ways. Here's an example of something I found useful from ChatGPT:

------------------------------------------------------------------------------------
Me: How would I go about creating an animated talking machine-learning online entity with open source tools?

ChatGPT:

Creating an animated talking machine-learning online entity involves multiple steps, from building the machine learning model for natural language understanding and generation to implementing the animation and integrating everything into a web application. Here's a high-level overview of the process using open source tools:

  1. Natural Language Processing (NLP) Model:

    • Choose an open-source NLP model, such as GPT-3, GPT-2, or BERT. These models are pre-trained on a vast amount of text data and can understand and generate human-like text.

  2. Speech Synthesis:

    • Select an open-source text-to-speech (TTS) library for generating speech from the text output of your NLP model. Popular choices include Mozilla's TTS and Google's WaveNet.

  3. Animation:

    • Create or choose an animation framework or library for generating animated characters or avatars. Open-source options like Three.js, Babylon.js, or Blender can be used for 3D character animations.

  4. Integration:

    • Develop a web application that integrates the NLP model, TTS, and animation components. You can use web development frameworks like React, Angular, or Vue.js for the frontend.

  5. User Interface (UI):

    • Design the user interface to allow users to input text or voice commands and receive animated responses. Implement buttons, text input fields, and microphone access if needed.

  6. Server Backend:

    • Set up a server to handle user requests, interface with the NLP model, and control the animation. You can use open-source web server frameworks like Flask or Express.js.

  7. Data Flow:

    • Define the data flow between the user interface, NLP model, TTS engine, and animation engine. Data should flow smoothly between these components.

  8. Language Processing:

    • Send user input to the NLP model, receive generated text responses, and send this text to the TTS engine to create spoken responses.

  9. Animation Control:

    • Use the animation framework to create and control animated characters or avatars. Sync the animation with the spoken responses generated by the TTS engine.

  10. Testing and Iteration:

    • Test your animated talking entity thoroughly to ensure that it understands user input, generates appropriate responses, and syncs the animation correctly. Gather user feedback and make improvements iteratively.

  11. Deployment:

    • Deploy your web application and associated services to a hosting platform or server, making it accessible to users online.

  12. Maintenance and Updates:

    • Regularly update and maintain your application, including updating the NLP model and fixing any issues that arise.

Keep in mind that building an animated talking machine-learning entity is a complex project that requires expertise in machine learning, natural language processing, animation, web development, and more. It may also involve significant computational resources, especially if you plan to use large NLP models.

Additionally, consider open-source tools and libraries available for each step of the process to save development time and leverage community support. Collaborating with a team with expertise in various areas can be beneficial for a project of this scope.

-----------------------------------------------------------------------------

As you can see, this is a fabulous answer. In a moment, by asking one question, it gave me a damn good high-level project plan that I can expand with detail until it is actually accomplished. It would take me a long time to research and write something like this. What a shortcut! Where did this answer come from? The people who loaded ChatGPT's database and "taught" it to chat with some Turing algorithms to make it friendly and human-like. This is no small accomplishment. The people who made ChatGPT have the intelligence. ChatGPT itself will tell you that it's nothing but a database, and that imputing a mindful consciousness onto ChatGPT is dangerous.
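
(As an aside, step 6 of that quoted plan, the server backend, can be sketched in a few lines. The sketch below assumes Flask; generate_reply and synthesize_speech are hypothetical stand-ins for the NLP and TTS pieces and are not part of the quoted answer.)

```python
# Minimal backend sketch for step 6 of the quoted plan. Flask is assumed;
# generate_reply() and synthesize_speech() are hypothetical stand-ins.
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_reply(text: str) -> str:
    # Stand-in for a call to an NLP model (step 1 of the plan).
    return f"You said: {text}"

def synthesize_speech(text: str) -> bytes:
    # Stand-in for a TTS engine (step 2); returns placeholder bytes here.
    return text.encode("utf-8")

@app.route("/chat", methods=["POST"])
def chat():
    user_text = request.get_json(force=True).get("text", "")
    reply = generate_reply(user_text)
    audio = synthesize_speech(reply)
    # Return the text reply; real audio would be streamed or base64-encoded.
    return jsonify({"reply": reply, "audio_bytes": len(audio)})

if __name__ == "__main__":
    app.run(port=5000)
```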
So I'm not arguing to win this bet. I'm arguing because, having been fascinated by computers for all of my very long life, I think I understand them better than a lot of people. I don't know everything. ChatGPT is very good at what it is useful for. It is not intelligent in any way near what a human is. The Turing test is about being able to tell the difference between a human and what a computer is now, which is just the von Neumann architecture expanded almost beyond recognition; it's still just a calculator with memory. Sorry for the long-ass reply.

predicted NO

@EmilyThomas I think you nailed it inadvertently with the "just engineering" comment. When ChatGPT can solve engineering problems on its own, I think we will have to cede control. In the meantime, we need to be prepared for when the real thing arrives.
