The day ChatGPT blows knowledge gaps wide open: the design of the question determines who wins and who loses
"Even though he uses the same ChatGPT, why is the only guy sitting next to him achieving great results at a fast pace?"
This is an actual cry I heard from a product development team at a SaaS company. If you just throw a new-feature proposal at ChatGPT, all you get back are safe template answers. Meanwhile, an experienced colleague fires off questions like "Point out the holes in this hypothesis," "List three anticipated rebuttals," and "Show the data model in pseudo-code," and within an hour completes a plan packed with user stories and technical considerations. When I checked the company's logs, I found that even with the same tool, the speed of producing results differed by a factor of ten.
This disparity is no coincidence. ChatGPT acts as a powerful booster for people with a foundation of knowledge and thinking, but for people without that foundation it is merely a "paraphrase machine." This article illustrates how the disparity arises and presents a self-diagnostic template for honing your questions, plus improvements to reverse the gap.
Key Takeaway
People who win with ChatGPT quantify their question-design skill and make a habit of trying to refute their own questions. It is not the tool itself but the routine of honing questions that determines knowledge gaps.
Roughly speaking
- ChatGPT is strong at convergent tasks that reconstruct existing knowledge, and the better you can design questions, the more leverage it gives you.
- The knowledgeable can run the winner's loop of "hypothesis → reinforce with ChatGPT → check for contradictions → follow-up questions," while those whose questions stay vague fall into a loser's loop where thinking stops at the template answer.
- The key to closing the gap is building a habit that combines a "question-design self-score" with a "refutation template." Even beginners can then quantify the quality of their questions and run the improvement cycle.
ChatGPT is a booster for the knowledgeable, but just a paraphrase machine for beginners
ChatGPT's specialty is convergent tasks such as recombining existing information into a different form or producing a structure with nothing left out. People who already have hypotheses and keywords can use this ability as leverage. If you cannot ask a question from scratch, however, all you get back is the average general theory. The difference in output is determined not by the tool itself but by how much preparation your brain has done.
The more convergent and low in novelty a task is, the faster ChatGPT completes it. The more a task shifts toward divergent, high-novelty work such as "designing an unknown market from scratch," the more ChatGPT reverts to the average answer. That is why the knowledgeable adopt a division of labor: come up with the questions yourself, and delegate verification and polishing. Their results snowball.
In fact, in an internal experiment at a B2B SaaS company I advised, tracking ChatGPT logs for two weeks showed that the "questioners," just 15% of users, accounted for 78% of the outputs rated A for completeness. Only those who can put their hypotheses and constraints into words can turn this convergence strength into an asset.
The winner's loop and the loser's loop
The dialogue loop with ChatGPT is fundamentally different for those who can use it well and those who cannot. The winner surfaces contradictions against their own hypotheses and sharpens accuracy with follow-up questions. The loser gets a template answer to an ambiguous request, cannot judge whether it is true or false, and stops thinking.
What is worth noting in the winner's flow is that winners build in constraints: target audience, purpose, metrics, deadlines, and negative examples. Simply stating these up front makes it far easier for ChatGPT to infer your intent, as the example below shows. Without them, no matter how many follow-up questions you ask, you will not escape the generic general theory.
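To make this concrete, here is a hypothetical constraint-laden prompt; the product, metric, and deadline are invented for illustration:

```
Target user: accounting staff in a B2B back office
Purpose: draft a spec for an automated invoice-matching feature
Success metric: 30% reduction in manual processing time
Constraints: outline within 800 words, due Friday
Prohibited: do not propose replacing the existing ERP
Task: list the user stories first, then the three biggest risks in this plan.
```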
The "question gap" seen through education, business and individual learning
The disparity is already visible on the ground. Here are three excerpted cases.
Case 1: University Planning and Development Seminar
Knowledgeable students feed the hypotheses they formed in class to ChatGPT to structure them, then identify what to verify in their research design and prototypes. As a result, the round trip between primary research and implementation moves twice as fast. Students without a hypothesis stall at "think about a project on X," copy-paste the output, and call it done. The supervisor judges the content weak, and they end up redoing it.
Case 2: New-feature planning and implementation plans
An experienced product manager enters conditions in parallel: "Target user: B2B back office," "Success metric: 30% reduction in processing time," "Technical constraint: integrate with existing microservices." Building on the user flows and API design proposals ChatGPT produces, they align with engineers and iterate quickly in sprints. A newcomer, by contrast, starts with "think of an idea for this feature," stops at abstract bullet points, and still cannot estimate the effort needed to implement it.
Case 3: Personal programming learning
When a working adult hoping to become an engineer asks ChatGPT "How do I improve my skills," they get only a list of safe self-help advice. Someone who has already narrowed their field instead asks for "a practice menu, ordered by difficulty, for building an API server in Go at five hours a week," then follows up with "also give example test code for each step and self-check items." Their learning log accumulates and their GitHub portfolio grows thicker.
The knowledgeable end each session with a feeling of exhilaration, while the beginner ends with disappointment. This emotional difference decides whether you open ChatGPT again the next day, and the gap widens exponentially over time.
A breakthrough improvement proposal: the question-design self-score and the refutation template
The greatest shortcut into the winner's loop is quantifying the quality of your questions. This is exactly the approach that became a hot topic, bordering on a flame war, on our Slack prompt-sharing channel. I suggest using the following self-score table and refutation template as a set.
| Step | Check perspective | Score (0/1) |
| --- | --- | --- |
| Observation | Have you described the readers/users/stakeholders by name? | |
| Hypothesis | Have you put the expected result into words, paired with the worst-case failure scenario? | |
| Constraints | Have you listed constraints such as deadlines, length, tone, and prohibited items? | |
| Refutation | Have you written out at least two anticipated rebuttals and a plan to counter each? | |
| Verification | Have you prepared a checklist to validate the output? | |
Each time you create a prompt, record your score out of 5. If it is below 3, do not send the prompt to ChatGPT yet (a sketch of this gate in code follows below). This alone dramatically reduces the habit of throwing out half-baked questions, and if you chart your score trend in a spreadsheet, you can look back calmly on the quality of your questions in next week's 1-on-1.
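As an illustration only, here is a minimal Python sketch of that scoring gate; the class and field names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class PromptSelfScore:
    """One row of the self-score table; each field is 0 or 1."""
    observation: int   # stakeholders described by name?
    hypothesis: int    # expected result plus worst-case failure stated?
    constraints: int   # deadline, length, tone, prohibited items listed?
    refutation: int    # at least two anticipated rebuttals with counter-plans?
    verification: int  # checklist to validate the output prepared?

    def total(self) -> int:
        return (self.observation + self.hypothesis + self.constraints
                + self.refutation + self.verification)

    def ready_to_send(self) -> bool:
        # The gate from the article: below 3 points, rework the question first.
        return self.total() >= 3

score = PromptSelfScore(1, 1, 0, 0, 1)
print(score.total(), score.ready_to_send())  # -> 3 True
```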
The sheet template is simple: date in column A, subject in column B, score in column C, and a moving average of the last five entries in column D, e.g. `=AVERAGE(OFFSET(C2,COUNTA(C:C)-5,0,5,1))`. (Note this assumes column C holds only score values starting at C2; if row 1 contains a header, use `COUNTA(C:C)-6` instead.) This lets you immediately spot the moment your growth curve bends.
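If you keep the log as a plain CSV file rather than a spreadsheet, the same moving average takes only a few lines of Python. The file name and column names below are hypothetical:

```python
import csv
from collections import deque

def recent_average(log_path: str, window: int = 5) -> float:
    """Moving average of the last `window` scores in a CSV log.

    Assumes a hypothetical log with the header: date,subject,score
    """
    last = deque(maxlen=window)  # keeps only the most recent `window` scores
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            last.append(int(row["score"]))
    return sum(last) / len(last) if last else 0.0

# Example: print(recent_average("selfscore_log.csv"))
```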
In addition, keep the following refutation template permanently on hand.
```
You are the auditor of my question. For the question below, list:
1. Possible risks of misreading or misunderstanding
2. Hidden assumptions and how to break them
3. Items that must be checked after the output is produced
```
Run this template through ChatGPT first to expose potential holes before sending the main prompt. This alone lets even beginners trace the winner's loop. A minimal automation sketch follows below.
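If you want to automate the audit step, here is a minimal sketch using the `openai` Python SDK (v1-style client); the model name is an assumption, so swap in whichever chat model you actually use:

```python
from openai import OpenAI  # pip install openai

AUDIT_TEMPLATE = """You are the auditor of my question. For the question below, list:
1. Possible risks of misreading or misunderstanding
2. Hidden assumptions and how to break them
3. Items that must be checked after the output is produced

Question:
{question}"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def audit_question(question: str) -> str:
    """Run the refutation template before sending the main prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user",
                   "content": AUDIT_TEMPLATE.format(question=question)}],
    )
    return resp.choices[0].message.content

print(audit_question(
    "Design a practice menu for building a Go API server at 5 hours a week."))
```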
Action after reading: run your self-score now
- Pick one task or learning goal you are currently facing and reproduce the self-score table above in Google Sheets.
- Design your prompt while filling in all five items and record the score. If you score below 3, rework the question.
- Screenshot the prompt and ChatGPT's answer and share them on internal chat or social media with the tag "#QuestionDesignSelfScore". Exposing your output attracts third-party feedback and closes the learning loop.
Conclusion: the knowledge gap is born from the "question gap"
ChatGPT multiplies the output of the knowledgeable tenfold. For those who cannot ask questions, it wastes ten times their time. The true nature of the inequality is not the technology itself but whether you have the habit of designing, and then trying to refute, your questions.
I hope you will hone your questions with the self-score and the refutation template and put one foot into the winner's loop. Tomorrow's you is determined by the quality of today's questions. **The knowledge gap widens without your noticing, but the question gap is one you can close yourself.** Whether or not you start today will lead to a different world in a year.