July 8, 2025
Prompt Engineering: Getting What You Want From ChatGPT Using CoT and Synthetic Prompts Part 2
Created: November 4, 2024 9:25 AM
Keywords: AI Customization Strategies, AI Innovation Strategies, Conversational AI Enhancements
Summary: The document discusses the use of Chain-of-Thought (CoT) prompting and synthetic prompting techniques to enhance the reasoning capabilities of Large Language Models (LLMs). It includes examples of Java methods for transcribing audio and generating embeddings using the OpenAI API, along with corresponding Mermaid sequence diagrams. The author emphasizes the importance of breaking down complex tasks into intermediate steps and highlights the effectiveness of synthetic prompting in generating diverse chain-of-thought prompts. The conclusion reflects on the potential of CoT prompting in advancing AI capabilities.
LLM Prompt Engineering with Chain of Thought and Synthetic Prompts: Getting what you want from ChatGPT using CoT Part 2
Introduction
Large Language Models (LLMs) have become increasingly popular in recent years due to their ability to understand and generate human-like text across various domains (V7 Labs) (Scale AI). These models, such as OpenAI’s GPT-4 and Google’s PaLM 2, are trained on massive amounts of text data, allowing them to excel at tasks like text generation, summarization, translation, and sentiment analysis (Scale AI) (Moveworks). However, despite their impressive capabilities, LLMs still need stronger reasoning and problem-solving abilities (Moveworks) to tackle complex problems.
In this article, we will explore the concept of Chain-of-Thought (CoT) prompting, a technique designed to enhance LLMs’ reasoning abilities. We will discuss the implementation of CoT prompting, its applications, and how it can improve the performance of LLMs in various tasks. By understanding the potential of CoT prompting, we can unlock new possibilities for LLM development and application, ultimately pushing the boundaries of artificial intelligence. This is a follow-up to the first article, where we focused on few-shot CoT vs. single-shot prompts and prompts with light prompt engineering.
Topics covered
Chain-of-Thought Prompting
- Definition and purpose of Chain-of-Thought (CoT) prompting
- How CoT prompting enhances LLMs’ reasoning capabilities (KDnuggets)
- Decomposition of multi-step requests into intermediate steps (Cobus Greyling)
Strategies for CoT Prompting
- Few-shot CoT: Providing examples of Question-Answer pairs with step-by-step explanations
- Zero-shot CoT: Prefixing the Answer block with “Let’s think step by step” to prompt the LLM to complete the output in that format
Implementing Chain-of-Thought Prompting
- Prompt engineering techniques for establishing chain-of-thought reasoning (Cobus Greyling)
- Comparison of standard LLM prompting and chain-of-thought prompting (Cobus Greyling)
- The benefits of decomposition for input and output interpretation (Cobus Greyling)
Synthetic Prompting
- Overview of synthetic prompting and its relation to CoT prompting (arXiv)
- The backward-forward procedure for generating chain-of-thought prompts (arXiv)
Applications of Chain-of-Thought Prompting
- Examples of CoT prompting in various domains, such as programming (Martin Fowler)
- How CoT prompting can enhance LLM accuracy and reasoning (Width)
Some Examples
- Zero-Shot
- Example in a related article
- Synthetic Prompt
Chain-of-Thought Prompting
Chain-of-Thought prompting is a technique that breaks down complex tasks into a series of intermediate steps to enhance the reasoning capabilities of Large Language Models (LLMs) (Greyling). This approach helps LLMs tackle common-sense reasoning and arithmetic challenges that usually require multi-step problem-solving (Greyling). CoT prompting decomposes multi-step requests into intermediate steps to make it easier for LLMs to understand and process the task (Greyling). This allows for a “window of insight and interpretation,” making the system more interpretable and easier to tweak, ultimately improving the LLM’s performance in complex tasks (Greyling). By implementing CoT prompting, LLMs can generate intermediate reasoning steps that help them solve multi-step arithmetic, commonsense, and symbolic reasoning tasks (Gao et al.). Experiments on three large language models have shown that CoT prompting significantly improves performance in these areas, with some models achieving state-of-the-art results after being prompted with just eight chain-of-thought examples (Gao et al.).
Implementing Chain-of-Thought Prompting
To implement Chain-of-Thought (CoT) prompting, follow these steps:
- Define the problem: Identify the multi-step reasoning task you want the LLM to solve.
- Decompose the problem: Break the task down into a series of intermediate steps.
- Instruct the LLM: Use prompt engineering techniques to guide the LLM through the intermediate steps, ensuring that it follows the chain-of-thought reasoning process.
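The three steps above can be sketched as a small prompt builder. This is a hedged illustration only: the class name, method names, and step phrasing are our own, not part of any library or of the tooling used in this article.

```java
// Sketch: turning a problem plus its intermediate steps into a CoT-style prompt.
public class CotInstructionBuilder {

    // Builds a prompt that walks the LLM through numbered intermediate steps.
    public static String build(String problem, String... steps) {
        StringBuilder sb = new StringBuilder();
        sb.append("Task: ").append(problem).append("\n");
        sb.append("Work through the following steps in order, ")
          .append("showing your reasoning for each:\n");
        for (int i = 0; i < steps.length; i++) {
            sb.append(i + 1).append(". ").append(steps[i]).append("\n");
        }
        sb.append("Then state the final answer.");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(build(
            "Create a Mermaid sequence diagram for a Java method",
            "Identify the participants (objects and APIs) in the method",
            "List the messages exchanged between the participants",
            "Emit the sequenceDiagram markup"));
    }
}
```

The point is simply that the decomposition is explicit in the prompt text, which makes the model far more likely to produce its answer step by step.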
CoT prompting offers significant advantages over standard LLM prompting. Standard prompting often leads to relatively flat scaling curves, meaning that increasing the scale of the model does not substantially improve performance. However, when using CoT prompting, increasing model scale leads to improved performance that substantially outperforms standard prompting for large model sizes. The benefits of decomposition for input and output interpretation are numerous. By decomposing the LLM input and output, CoT prompting creates a window of insight and interpretation, allowing for manageable granularity for both input and output. This decomposition window makes it easier to tweak the system and ultimately improves the LLM’s performance in complex tasks.
For more information, check out these resources:
- Master Prompting Concepts: Chain-of-Thought Prompting
- Chain-of-Thought Prompting in LLMs
- Language Models Perform Reasoning via Chain-of-Thought Prompts
Strategies for CoT Prompting
Chain-of-Thought (CoT) prompting can be implemented using strategies such as few-shot CoT and zero-shot CoT. These strategies aim to guide Large Language Models (LLMs) through a series of intermediate steps to solve complex reasoning tasks.
Few-shot CoT
In few-shot CoT, examples of question-answer pairs are provided, where the answer is explained step by step. This approach helps LLMs understand the problem-solving process and encourages them to follow a structured approach, mimicking human-like thought processes. Some benefits of few-shot CoT include:
- Improved reasoning capabilities in LLMs
- Enhanced interpretability of the LLM’s output
- Better performance in complex tasks
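The few-shot strategy can be sketched as follows. The Q:/A: layout here is a common convention rather than a requirement of any particular API, and the class and example wording are our own:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: assembling a few-shot CoT prompt from worked question/answer pairs.
public class FewShotCotPrompt {

    public static String build(Map<String, String> workedExamples, String question) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> ex : workedExamples.entrySet()) {
            sb.append("Q: ").append(ex.getKey()).append('\n')
              .append("A: ").append(ex.getValue()).append("\n\n");
        }
        // Leave the final answer open so the model completes it step by step.
        sb.append("Q: ").append(question).append("\nA:");
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> examples = new LinkedHashMap<>();
        examples.put(
            "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many now?",
            "Roger started with 5 balls. 2 cans of 3 balls is 6 more. 5 + 6 = 11. "
                + "The answer is 11.");
        System.out.println(build(examples,
            "The cafeteria had 23 apples. It used 20 and bought 6 more. How many?"));
    }
}
```

Because every worked answer shows its reasoning, the model tends to imitate that structure when it completes the final, open answer.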
Zero-shot CoT
In zero-shot CoT, the answer block is prefixed with “Let’s think step by step” to prompt the LLM to complete the output in that format. This approach does not rely on providing explicit examples but instead encourages the LLM to generate intermediate reasoning steps. Some benefits of zero-shot CoT include:
- Increased flexibility in LLM output
- Encouragement of self-generated reasoning steps
- Potential for improved performance in complex tasks
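Zero-shot CoT is mechanically trivial, which a short sketch makes clear; the class and constant names are our own:

```java
// Sketch: zero-shot CoT prefixes the task with a step-by-step cue,
// exactly as the prompts later in this article do.
public class ZeroShotCot {
    public static final String CUE = "Let's think step by step. ";

    public static String wrap(String task) {
        return CUE + task;
    }

    public static void main(String[] args) {
        System.out.println(wrap(
            "Create a Mermaid sequence diagram for the following Java method: ..."));
    }
}
```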
Both few-shot and zero-shot CoT strategies can be employed to enhance the reasoning capabilities of LLMs, depending on the specific requirements and desired outcomes of the task at hand. For more information, check out these resources:
- Master Prompting Concepts: Chain-of-Thought Prompting
- Chain-of-Thought Prompting for LLMs
- Language Models Perform Reasoning via Chain-of-Thought Prompts
- Zero-shot Chain-of-Thought Prompting with Few-shot Examples
Synthetic Prompting
Synthetic prompting is a technique that uses a few handcrafted examples to prompt LLMs to generate more examples on their own. It is closely related to Chain-of-Thought (CoT) prompting, as it aims to enhance LLMs’ reasoning capabilities by guiding them through a series of intermediate steps. Synthetic prompting differs from CoT prompting in that it focuses on generating new examples automatically rather than relying solely on handcrafted examples.
The backward-forward procedure is a critical component of synthetic prompting for generating chain-of-thought prompts. This procedure consists of two main processes:
- Backward process: In this step, the LLM synthesizes a question conditioned on a topic word, a target reasoning complexity, and a generated reasoning chain. The reasoning steps are numbered to better control the reasoning complexity (e.g., 1 and 2).
- Forward process: The LLM synthesizes a more precise reasoning chain for the question produced in the backward process. This step helps refine the generated reasoning chain, ensuring it aligns with the question and follows the desired chain-of-thought reasoning process.
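The backward-forward loop can be outlined in code. This is a hedged sketch under stated assumptions: the LLM call is abstracted behind a one-method interface (in practice you would delegate to your API client), and the prompt wording is our own paraphrase of the procedure, not the paper's exact prompts:

```java
// Sketch of the backward-forward procedure for synthesizing CoT examples.
public class BackwardForward {

    // Stand-in for a real LLM client call; wire this to your API of choice.
    @FunctionalInterface
    public interface Llm {
        String complete(String prompt);
    }

    public record SyntheticExample(String question, String reasoningChain) {}

    // Backward: synthesize a question from a topic word and a target complexity.
    public static String backward(Llm llm, String topic, int steps) {
        return llm.complete("Given the topic \"" + topic + "\", write a question "
            + "whose solution needs exactly " + steps + " numbered reasoning steps.");
    }

    // Forward: synthesize a more precise reasoning chain for that question.
    public static String forward(Llm llm, String question) {
        return llm.complete("Question: " + question
            + "\nAnswer with numbered reasoning steps, then the final answer.");
    }

    public static SyntheticExample synthesize(Llm llm, String topic, int steps) {
        String question = backward(llm, topic, steps);
        return new SyntheticExample(question, forward(llm, question));
    }

    public static void main(String[] args) {
        // Canned stub so the sketch runs without network access.
        Llm stub = prompt -> prompt.startsWith("Given the topic")
            ? "If a train travels 60 km in 1 hour, how far does it go in 2 hours?"
            : "1. The speed is 60 km/h. 2. 60 * 2 = 120. The answer is 120 km.";
        SyntheticExample ex = synthesize(stub, "distance", 2);
        System.out.println(ex.question());
        System.out.println(ex.reasoningChain());
    }
}
```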
By employing the backward-forward procedure, synthetic prompting can generate a diverse set of chain-of-thought prompts, ultimately enhancing the LLM’s reasoning capabilities and performance in complex tasks.
Examples
Zero-Shot CoT:
Let’s rewrite the given prompt as a zero-shot CoT:
PROMPT Zero-Shot example
Let's think step by step. Create a Mermaid sequence diagram for
the following Java method:
private static void callTranscribe() throws IOException {
// Create the client
final var openAiKey = System.getenv("OPENAI_API_KEY");
final OpenAIClient client = OpenAIClient.builder().setApiKey(openAiKey).build();
File file = new File("/Users/richardhightower/Documents/audio_notes_patents/meeting_notes1.m4a");
byte[] bytes = Files.readAllBytes(file.toPath());
// Create the chat request
final TranscriptionRequest request = TranscriptionRequest.builder()
.model("whisper-1").prompt("Write up notes").language("en").file(bytes)
.build();
// Call OpenAI API with chat message
final ClientResponse<TranscriptionRequest, AudioResponse> response = client.transcribe(request);
response.getResponse().ifPresent(r -> System.out.println(r.getBody()));
response.getException().ifPresent(Throwable::printStackTrace);
response.getStatusMessage().ifPresent(error -> System.out.printf("status message %s %d \n", error, response.getStatusCode().orElse(0)));
}
ChatGPT Response to Prompt
In a Mermaid sequence diagram, participants (like classes, methods, and objects) interact with each other. I will represent each object and method as a participant in this case. Here’s a sequence diagram for the method you provided:
sequenceDiagram
participant Environment as System Environment
participant OpenAiClientBuilder as OpenAIClient.Builder
participant OpenAiClient as OpenAIClient
participant FileObj as File
participant FilesObj as Files
participant TranscriptionRequestBuilder as TranscriptionRequest.Builder
participant TranscriptionRequest as TranscriptionRequest
participant ClientResponse as ClientResponse<AudioResponse>
participant SystemOut as System.out
participant Throwable as Throwable
participant SystemPrintf as System.out.printf
Environment->>OpenAiClientBuilder: getEnv("OPENAI_API_KEY")
OpenAiClientBuilder->>OpenAiClient: build()
FileObj->>FileObj: "/Users/richardhightower/Documents/audio_notes/notes1.m4a"
FileObj->>FilesObj: toPath()
FilesObj->>FileObj: readAllBytes(Path)
TranscriptionRequestBuilder->>TranscriptionRequest: build()
OpenAiClient->>ClientResponse: transcribe(TranscriptionRequest)
ClientResponse->>SystemOut: println(r.getBody()) (if response is present)
ClientResponse->>Throwable: printStackTrace() (if exception is present)
ClientResponse->>SystemPrintf: printf("status message %s %d \n", error, statusCode) (if status message is present)
This sequence diagram represents the flow of control in the method. It’s simplified for clarity, ignoring the specific details of how the ClientResponse and TranscriptionRequest objects are created and used. Each arrow represents a call from one participant to another. The messages in parentheses demonstrate that these actions only occur if certain conditions are met.
To see the visual diagram, paste the mermaid code into a Mermaid live editor, such as: Mermaid Live Editor
Please note that Mermaid syntax might only be fully supported in some markdown editors.
…
The above output is solid but different from what we wanted. This is explained more in this article, where we cover various few-shot CoT prompts for generating a Mermaid diagram from Java method source code. We also cover what we consider an A. This output is a solid B-.
I got the diagram I wanted with a zero-shot CoT by asking it to break the problem down and explain the parts of a Mermaid sequence diagram before generating the sequence diagram for the Java code.
PROMPT Zero-Shot example that produces the results desired
Let's think step by step. Please reason on and first explain the role
of the participant. Explain the various parts of a `participant` like
the `as` keyword. Please explain the purpose of the message and the
different types of messages and the significance of the `:` in the message
and what a good description of a message looks like. Create a Mermaid
sequence diagram for the following Java method:
private static void callTranscribe() throws IOException {
// Create the client
final var openAiKey = System.getenv("OPENAI_API_KEY");
final OpenAIClient client = OpenAIClient.builder().setApiKey(openAiKey).build();
File file = new File("/Users/richardhightower/Documents/audio_notes_patents/meeting_notes1.m4a");
byte[] bytes = Files.readAllBytes(file.toPath());
// Create the chat request
final TranscriptionRequest request = TranscriptionRequest.builder()
.model("whisper-1").prompt("Write up notes").language("en").file(bytes)
.build();
// Call OpenAI API with chat message
final ClientResponse<TranscriptionRequest, AudioResponse> response = client.transcribe(request);
response.getResponse().ifPresent(r -> System.out.println(r.getBody()));
response.getException().ifPresent(Throwable::printStackTrace);
response.getStatusMessage().ifPresent(error -> System.out.printf("status message %s %d \n", error, response.getStatusCode().orElse(0)));
}
ChatGPT Response to Prompt
It seems like you want to learn about sequence diagrams and how to interpret their parts using Mermaid syntax, with your given Java method as a concrete example.
Mermaid is a simple markdown-like script language for generating charts from text via JavaScript. The syntax you mentioned (participant, as, :, message) is part of the Mermaid language used to create sequence diagrams.
- Participant: In a sequence diagram, a participant typically represents a class or an object in a system. In the Mermaid syntax, participant is a keyword used to define these actors involved in the sequence. A participant could be a user, a system, or a subsystem of your software that interacts with other parts of the system.
- as keyword: In Mermaid, the as keyword is used to give a participant a more human-readable alias. For example, participant A as Alice creates a participant named Alice.
- Message: A message in a sequence diagram is a sort of interaction or communication between participants. It is represented by an arrow from the sender participant to the receiver participant. The type of the arrow signifies the type of message, and the label on the arrow describes what the message does.
- Types of messages: There are different types of messages you can represent in a sequence diagram. The most common ones are:
- Synchronous messages: These are the most common and are represented by a solid line with a solid arrowhead. They denote a call from the sender that does not return until the receiver has processed the message.
- Asynchronous messages: These are represented by a solid line with a lined arrowhead. They denote a call that returns immediately, meaning the sender can continue processing before the message has been received or processed by the receiver.
- Reply message: This is represented by a dashed line with a lined arrowhead. It shows the return of control from the called participant back to the caller.
- :: In Mermaid syntax, a colon : is used to separate the sender from the message when defining interactions. For example, A->B: Message means that participant A sends a message to participant B.
- Good message description: The description of the message should be concise, clear, and accurate. It should effectively communicate the purpose of the interaction between the two participants.
Now, let’s create a Mermaid sequence diagram for your Java method:
sequenceDiagram
participant Client as Java Client
participant System as System Environment
participant API as OpenAI API
participant File as Audio File
participant Req as Transcription Request
participant Resp as Client Response
Client->>System: Get OPENAI_API_KEY
Client->>API: Build OpenAIClient with OPENAI_API_KEY
Client->>File: Read file contents
Client->>Req: Build Transcription Request
Client->>API: Send Transcription Request
API->>Resp: Send back Client Response
Client->>Resp: Handle Client Response
…
The above output is better than what we expected. What we wanted is described in the first article on CoT (article), where we cover various few-shot CoT prompts for generating a Mermaid diagram from Java method source code. We also cover what we consider an A. This is not merely an A. This is an A+, and it beats our best few-shot CoT example from the last article.
All of the examples so far use ChatGPT 4, but let’s try that with ChatGPT 3.5 Turbo, since I did not get access to ChatGPT 4 via the OpenAI API, and the tool I wrote to generate documents still uses ChatGPT 3.5 Turbo.
ChatGPT Results from 3.5 Turbo
sequenceDiagram
participant App as App
participant OpenAIClient as Client
participant TranscriptionRequest as Request
participant AudioResponse as Response
App ->> Client: setApiKey(openAiKey)
Note over Client: Creates client\nwith API key
App ->> Request: builder()
App ->> Request: model("whisper-1")
App ->> Request: prompt("Write up notes")
App ->> Request: language("en")
App ->> Request: file(bytes)
Note over Request: Creates request\nwith parameters
App ->> Client: transcribe(request)
Note over Client: Calls the API\nwith the request
Client ->> Response: AudioResponse
Note over Response: API response\nwith audio data
Client -->> App: ClientResponse<TranscriptionRequest, AudioResponse>
Note over App: Receives the response
App ->> Response: getBody()
Note over Response: Get body from response\nand print it
App ->> Throwable: printStackTrace()
Note over Throwable: Print stack trace\nif there's an exception
App ->> Response: getStatusCode()
Note over Response: Get status code from response\nand print it
The above output is acceptable but a bit more verbose than ChatGPT 4’s, which is what I wanted, but it is also correct and gets a passing grade. It is better than the first zero-shot CoT example. (What we wanted is described in the first article on CoT (article).) This is not an A+. This is a solid A, and it beats or matches our best few-shot CoT example from the last article.
Let’s give it some more hints and context.
PROMPT Zero-Shot example that produces different results from 3.5
Let's think step by step. Please reason on and first explain the role
of the participant. Explain the various parts of a `participant` like
the `as` keyword. Please explain the purpose of the message and the
different types of messages and the significance of the `:` in the message
and what a good description of a message looks like.
Let's create a high-level diagram so only a description of what
the code is doing, not the code itself in the description. Create a Mermaid
sequence diagram for the following Java method:
private static void callTranscribe() throws IOException {
// Create the client
final var openAiKey = System.getenv("OPENAI_API_KEY");
final OpenAIClient client = OpenAIClient.builder().setApiKey(openAiKey).build();
File file = new File("/Users/richardhightower/Documents/audio_notes_patents/meeting_notes1.m4a");
byte[] bytes = Files.readAllBytes(file.toPath());
// Create the chat request
final TranscriptionRequest request = TranscriptionRequest.builder()
.model("whisper-1").prompt("Write up notes").language("en").file(bytes)
.build();
// Call OpenAI API with chat message
final ClientResponse<TranscriptionRequest, AudioResponse> response = client.transcribe(request);
response.getResponse().ifPresent(r -> System.out.println(r.getBody()));
response.getException().ifPresent(Throwable::printStackTrace);
response.getStatusMessage().ifPresent(error -> System.out.printf("status message %s %d \n", error, response.getStatusCode().orElse(0)));
}
ChatGPT Results from 3.5 Turbo with modified zero shot
sequenceDiagram
participant Client as "Java Client"
participant OpenAIClient as "OpenAI Client"
participant API as "OpenAI API"
participant Filesystem as "File System"
Client ->> OpenAIClient: Create OpenAI Client
Client ->> Filesystem: Read audio file
OpenAIClient ->> API: Transcription Request
API -->> OpenAIClient: Transcription Response
OpenAIClient ->> Client: Handle response (print or handle error)
The above output is closer to what I wanted after adding more detail. (What we wanted is described in the first article on CoT (article).) This is also an A+, but not better than what ChatGPT 4 produced. It beats or matches our best few-shot CoT example from the last article. That is not to say we could not improve our few-shot examples.
I don’t like the quotes around the actors, but that is more a matter of personal taste.
It also added this nice description after.
Description of the steps:
- The Java client creates an instance of the OpenAI client using its API key.
- The Java client reads the audio file from the file system.
- The Java client sends a transcription request to the OpenAI API using the OpenAI client.
- The OpenAI API processes the request and sends back a transcription response to the OpenAI client.
- The OpenAI client handles the response, printing the transcription result if successful or handling any errors.
Synthetic Prompt Example
A reminder: synthetic prompting is a technique that enhances the language model’s reasoning capabilities by having the LLM generate a few examples instead of just providing examples or context. It is similar to CoT in that it gets the model to reason about the problem.
To create a synthetic prompt set of example prompts for generating a Mermaid sequence diagram from the given Java method, consider the following examples:
Example 1:
Create a Mermaid sequence diagram for a Java method that transcribes an
audio file using the OpenAI API. The method should include creating a client,
reading the audio file, creating a transcription request, and handling the
response.
Example 2:
Design a Mermaid sequence diagram to illustrate the process of transcribing
an audio file in Java using the OpenAI API. The diagram should demonstrate the steps
of initializing the client, reading the audio file, creating a transcription
request, and processing the response.
Example 3:
Using Mermaid, create a sequence diagram that represents a Java method for
transcribing an audio file with the OpenAI API. The diagram should include
the steps for setting up the client, reading the audio file, constructing a
transcription request, and handling the response from the API.
These synthetic prompts provide different ways to request the creation of a Mermaid sequence diagram for the given Java method, allowing the LLM to generate a diverse set of responses.
In the context of synthetic prompting, the backward-forward procedure is a two-step process that generates chain-of-thought prompts for LLMs. This procedure helps create diverse and effective prompts for reasoning tasks, improving the LLM’s performance.
- Backward process: The LLM synthesizes a question conditioned on a topic word, a target reasoning complexity, and a generated reasoning chain. The reasoning steps are numbered to better control the reasoning complexity (e.g., 1 and 2). This process helps generate diverse questions that align with the desired reasoning complexity.
- Forward process: The LLM synthesizes a more precise reasoning chain for the question produced in the backward process. This step refines the generated reasoning chain, ensuring it aligns with the question and follows the desired chain-of-thought reasoning process.
The backward-forward procedure allows LLMs to automatically synthesize more examples by treating the initial examples as seed examples. This approach helps generate diverse chain-of-thought prompts, ultimately enhancing the LLM’s reasoning capabilities and performance in complex tasks. You can even give it some examples and ask it to create more.
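That last idea, giving the model a few seed examples and asking for more, amounts to a meta-prompt. A minimal sketch, with wording and names that are illustrative rather than from any library:

```java
import java.util.List;

// Sketch: a meta-prompt that hands the model a few seed prompts and asks it
// to synthesize more of the same kind.
public class SeedExpansionPrompt {

    public static String build(List<String> seeds, int wanted) {
        StringBuilder sb = new StringBuilder("Here are some example prompts:\n\n");
        for (int i = 0; i < seeds.size(); i++) {
            sb.append("Example ").append(i + 1).append(":\n")
              .append(seeds.get(i)).append("\n\n");
        }
        sb.append("Write ").append(wanted)
          .append(" more prompts in the same style, varying the wording ")
          .append("but keeping the same underlying task.");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(build(List.of(
            "Create a Mermaid sequence diagram for a Java method that transcribes audio.",
            "Design a Mermaid sequence diagram illustrating audio transcription in Java."),
            3));
    }
}
```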
In addition to the backward-forward procedure, another configuration of chain-of-thought prompting involves providing the chain-of-thought prompt only after the answer. This approach isolates whether the model depends on the produced chain of thought to give the final solution and allows the model to better access relevant knowledge acquired during pretraining.
For more information on synthetic prompting and chain-of-thought prompting, check out these resources:
- Synthetic Prompting for Large Language Models: arxiv.org
- Language Models Perform Reasoning via Chain-of-Thought Prompts: ai.googleblog.com/language-models-perform-reasoning
Let’s try to execute this backward.
From this sequence diagram, create a hypothetical Java method called transcribeAudio()
```mermaid
sequenceDiagram
participant Client as "Java Client"
participant OpenAIClient as "OpenAI Client"
participant API as "OpenAI API"
participant Filesystem as "File System"
Client ->> OpenAIClient: Create OpenAI Client
Client ->> Filesystem: Read audio file
OpenAIClient ->> API: Transcription Request
API -->> OpenAIClient: Transcription Response
OpenAIClient ->> Client: Handle response (print or handle error)
```
We are giving it the mermaid diagram markup and asking it to generate hypothetical Java.
Let’s give another backward example.
From this sequence diagram, create a hypothetical Java method called ***transcribeAudio***()
sequenceDiagram
participant Client as Java Client
participant System as System Environment
participant API as OpenAI API
participant File as Audio File
participant Req as Transcription Request
participant Resp as Client Response
Client->>System: Get OPENAI_API_KEY
Client->>API: Build OpenAIClient with OPENAI_API_KEY
Client->>File: Read file contents
Client->>Req: Build Transcription Request
Client->>API: Send Transcription Request
API->>Resp: Send back Client Response
Client->>Resp: Handle Client Response
Again, we are giving it another example of mermaid diagram markup and asking it to generate hypothetical Java.
In the real world, your forward and backward examples will differ from your final request. We will ask it to draw a diagram with a different example and see how it does.
We are ready to use it once we prime it with examples. Now it has the context and time to “think” about the problem: essentially, time to load enough context to improve text generation.
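Priming a chat session this way is just a matter of stacking the example exchanges ahead of the real request. A hedged sketch, assuming the usual system/user/assistant chat-role convention; the Message type here is our own, not from a client library:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: priming a chat session with example exchanges before the real ask.
public class PrimedSession {

    public record Message(String role, String content) {}

    public static List<Message> prime(List<String[]> examplePairs, String finalRequest) {
        List<Message> messages = new ArrayList<>();
        messages.add(new Message("system",
            "You convert between Java methods and Mermaid sequence diagrams."));
        for (String[] pair : examplePairs) {
            messages.add(new Message("user", pair[0]));       // example prompt
            messages.add(new Message("assistant", pair[1]));  // example response
        }
        messages.add(new Message("user", finalRequest));      // the real request
        return messages;
    }

    public static void main(String[] args) {
        List<Message> session = prime(
            List.of(new String[] {
                "From this sequence diagram, create a hypothetical transcribeAudio() method",
                "private static void transcribeAudio() { /* ... */ }" }),
            "Create a Mermaid sequence diagram for callTranscribe()");
        session.forEach(m -> System.out.println(m.role() + ": " + m.content()));
    }
}
```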
Now we ask it to generate with the actual method.
Create a Mermaid sequence diagram for the following Java method:
private static void callTranscribe() throws IOException {
// Create the client
final var openAiKey = System.getenv("OPENAI_API_KEY");
final OpenAIClient client = OpenAIClient.builder().setApiKey(openAiKey).build();
File file = new File("/Users/richardhightower/Documents/audio_notes_patents/meeting_notes1.m4a");
byte[] bytes = Files.readAllBytes(file.toPath());
// Create the chat request
final TranscriptionRequest request = TranscriptionRequest.builder()
.model("whisper-1").prompt("Write up notes").language("en").file(bytes)
.build();
// Call OpenAI API with chat message
final ClientResponse<TranscriptionRequest, AudioResponse> response = client.transcribe(request);
response.getResponse().ifPresent(r -> System.out.println(r.getBody()));
response.getException().ifPresent(Throwable::printStackTrace);
response.getStatusMessage().ifPresent(error -> System.out.printf("status message %s %d \n", error, response.getStatusCode().orElse(0)));
}
Enough talk; let’s run it. Here is the complete back-and-forth ChatGPT 4 session of synthetic prompts and the synthetic backward and forward approach.
### PROMPT Synthetic Prompt Example 1
Create a Mermaid sequence diagram for a Java method that
transcribes an audio file using the OpenAI API. The method should
include creating a client, reading the audio file, creating a
transcription request, and handling the response.
### ChatGPT Results from Synthetic Prompt Example 1
***Sure, here's an example of a sequence diagram you might use to model this process:***
sequenceDiagram
participant C as Client
participant F as File
participant O as OpenAI API
participant R as Response
C->>F: Read audio file
activate F
F-->>C: Return audio data
deactivate F
C->>O: Create transcription request (with audio data)
activate O
O-->>C: Return transcription response
deactivate O
C->>R: Handle the response
activate R
R-->>C: Return final result
deactivate R
***This diagram has four participants:***
1. ***The Client (C), which initiates the sequence by reading the audio file.***
2. ***The File (F), which returns the audio data to the client.***
3. ***The OpenAI `API` (O), which processes the transcription request and returns the transcription response.***
4. ***The Response (R), which handles the response and returns the final result.***
***Just so you know, it's a simplistic representation of the actual flow. Handling the response could be more complex depending on the nature of the application, like whether you want to store the answer, how you want to handle the errors, etc. The same applies to other steps too***.
### PROMPT Synthetic Prompt Example 2
Design a Mermaid sequence diagram to illustrate the process of transcribing
an audio file in Java using the OpenAI API. The diagram should demonstrate the steps
of initializing the client, reading the audio file, creating a transcription
request, and processing the response.
### ChatGPT Results from Synthetic Prompt Example 2
***Certainly, here's an example of a Mermaid sequence diagram that illustrates these steps***:
sequenceDiagram
participant J as Java Program
participant C as OpenAI Client
participant F as Audio File
participant O as OpenAI API
participant R as Transcription Response
J->>C: Initialize OpenAI Client
activate C
J->>F: Read audio file
activate F
F-->>J: Return audio data
deactivate F
J->>C: Send audio data
C->>O: Create transcription request (with audio data)
activate O
O-->>C: Return transcription response
deactivate O
C->>J: Send transcription response
J->>R: Process transcription response
activate R
R-->>J: Return final result
deactivate R
***In this diagram, we have five participants:***
1. ***The Java Program (J), which represents your Java code and orchestrates the whole process.***
2. ***The OpenAI Client (C), which communicates with the OpenAI `API`.***
3. ***The Audio File (F), which holds the audio data to be transcribed.***
4. ***The OpenAI `API` (O), which receives the transcription request and provides the transcription response.***
5. ***The Transcription Response (R), which is processed to retrieve the final result.***
***Please note, it's a simplistic representation of the actual flow. Processing the response could be more complex depending upon the nature of the application, like whether you want to store the response, how you want to handle the errors, etc. The same applies to other steps too.***
### PROMPT Synthetic Prompt Example 3
Using Mermaid, create a sequence diagram that represents a Java method
for transcribing an audio file with the OpenAI API. The diagram should
include the steps for setting up the client, reading the audio file,
constructing a transcription request, and handling the response from
the API.
### ChatGPT Results from Synthetic Prompt Example 3
***Certainly, I can provide an example of a Mermaid sequence diagram that reflects this process:***
sequenceDiagram
participant J as Java Application
participant C as OpenAI Client
participant A as Audio File
participant O as OpenAI API
participant R as Transcription Response
J->>C: Initialize OpenAI Client
activate C
J->>A: Read audio file
activate A
A-->>J: Return audio data
deactivate A
J->>C: Send audio data
C->>O: Construct transcription request (with audio data)
activate O
O-->>C: Return transcription response
deactivate O
C->>J: Send transcription response
J->>R: Handle transcription response
activate R
R-->>J: Return processed result
deactivate R
***In this diagram:***
1. ***The Java Application (J) initializes the OpenAI Client (C) and reads the Audio File (A).***
2. ***The audio data is sent to the OpenAI Client (C), which constructs a transcription request and sends it to the OpenAI API (O).***
3. ***The OpenAI API (O) returns a transcription response to the OpenAI Client (C), which sends it back to the Java Application (J).***
4. ***Finally, the Java Application (J) handles the Transcription Response (R), processes it, and returns the result.***
***This diagram is a simple representation. In reality, a robust transcription system might require error handling, retries, and additional considerations.***
### PROMPT Synthetic Prompt Backwards Example 1
From this sequence diagram, create a hypothetical Java method called transcribeAudio()
sequenceDiagram
participant Client as "Java Client"
participant OpenAIClient as "OpenAI Client"
participant API as "OpenAI API"
participant Filesystem as "File System"
Client ->> OpenAIClient: Create OpenAI Client
Client ->> Filesystem: Read audio file
OpenAIClient ->> API: Transcription Request
API -->> OpenAIClient: Transcription Response
OpenAIClient ->> Client: Handle response (print or handle error)
### ChatGPT Results from Synthetic Prompt Backwards Example 1
Here’s a simple Java method that follows the sequence diagram you’ve provided. Note that the actual implementation details, especially for the OpenAI API call, will depend on the specific Java library and version you’re using for OpenAI’s services.
import openai.OpenAIClient;
import java.nio.file.Files;
import java.nio.file.Paths;
public class TranscriptionService {
private OpenAIClient openAIClient;
public TranscriptionService() {
// Initialize OpenAI Client
openAIClient = new OpenAIClient();
}
public void transcribeAudio(String filePath) {
try {
// Read audio file
byte[] audioData = Files.readAllBytes(Paths.get(filePath));
// Send transcription request
String response = openAIClient.transcribe(audioData);
// Handle the response
System.out.println(response);
} catch (Exception e) {
// Handle errors
System.out.println("Error: " + e.getMessage());
e.printStackTrace();
}
}
}
Please note that this is a simplified example. Real-world code would have additional error handling, and the process of sending the transcription request and receiving the response would likely be more complex, involving asynchronous programming, retries, timeouts, and other advanced topics. This example also assumes that you are using a hypothetical OpenAI library for Java that isn’t officially available from OpenAI (as of the knowledge cut-off in September 2021).
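ChatGPT’s caveat about retries can also be sketched. The helper below is a hypothetical, minimal retry wrapper (the name `RetryHelper` and its signature are my assumptions, not from any SDK) showing how a transcription call might be re-attempted on transient failures:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of the "retries" caveat: re-run a task up to
// maxAttempts times before giving up. Names are illustrative.
public class RetryHelper {

    public static <T> T withRetries(Callable<T> task, int maxAttempts) {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw new RuntimeException("all " + maxAttempts + " attempts failed", last);
    }

    public static void main(String[] args) {
        // Simulated transient failure: succeeds on the third attempt.
        int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure");
            return "transcribed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

A production version would add backoff delays and distinguish retryable errors (timeouts, rate limits) from permanent ones (authentication failures).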
### PROMPT Synthetic Prompt Backwards Example 2
From this sequence diagram, create a hypothetical Java method called
transcribeAudio()
```mermaid
sequenceDiagram
participant Client as Java Client
participant System as System Environment
participant API as OpenAI API
participant File as Audio File
participant Req as Transcription Request
participant Resp as Client Response
Client->>System: Get OPENAI_API_KEY
Client->>API: Build OpenAIClient with OPENAI_API_KEY
Client->>File: Read file contents
Client->>Req: Build Transcription Request
Client->>API: Send Transcription Request
API->>Resp: Send back Client Response
Client->>Resp: Handle Client Response
```
### ChatGPT Results from Synthetic Prompt Backwards Example 2
Sure, I’ll provide an example of a Java method that could correspond to this sequence diagram.
Please note that the real implementation details would depend on the actual OpenAI Java SDK (which, as of my knowledge cutoff in September 2021, doesn’t officially exist) or any other specific library you’re using to interact with the OpenAI API.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.io.IOException;
public class TranscriptionService {
private OpenAIClient openAIClient;
public TranscriptionService() {
// Get OPENAI_API_KEY from System Environment
String openAIApiKey = System.getenv("OPENAI_API_KEY");
// Build OpenAIClient with OPENAI_API_KEY
openAIClient = new OpenAIClient(openAIApiKey);
}
public void transcribeAudio(String filePath) {
try {
// Read audio file
byte[] audioData = Files.readAllBytes(Paths.get(filePath));
// Build Transcription Request
TranscriptionRequest request = new TranscriptionRequest(audioData);
// Send Transcription Request
TranscriptionResponse response = openAIClient.transcribe(request);
// Handle Client Response
if(response.isSuccess()) {
System.out.println(response.getTranscription());
} else {
System.out.println("Error: " + response.getErrorMessage());
}
} catch (IOException e) {
System.out.println("Error reading audio file: " + e.getMessage());
} catch (Exception e) {
// Handle any other potential errors
System.out.println("Error: " + e.getMessage());
}
}
}
This is hypothetical code; the OpenAIClient, TranscriptionRequest, and TranscriptionResponse classes do not exist and should be replaced with actual classes from the library you use to interact with the OpenAI API. The error handling here is also simplified; a real-world application would likely need more robust error handling.
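One more robustness point: the constructor above reads OPENAI_API_KEY and would silently pass a null key along if the variable were unset. A hedged sketch of a stricter lookup (the `ApiKeyResolver` class is hypothetical; only `System.getenv` is a real API) could look like this:

```java
import java.util.Map;

// Hypothetical sketch: fail fast with a clear message when the API key
// is missing, instead of letting a null key surface later in a request.
public class ApiKeyResolver {

    // Takes the environment as a Map so the lookup can be tested
    // without touching the real process environment.
    public static String resolveKey(Map<String, String> env) {
        String key = env.get("OPENAI_API_KEY");
        if (key == null || key.isBlank()) {
            throw new IllegalStateException(
                "OPENAI_API_KEY is not set; export it before running");
        }
        return key;
    }

    public static void main(String[] args) {
        // Demonstrates with a literal map; real code would pass System.getenv().
        System.out.println(resolveKey(Map.of("OPENAI_API_KEY", "sk-test")));
    }
}
```

Passing the environment in as a `Map` keeps the lookup testable; real code would call `resolveKey(System.getenv())`.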
### PROMPT Actual Prompt of what we want
Create a Mermaid sequence diagram for the following Java method:
```java
private static void callTranscribe() throws IOException {
// Create the client
final var openAiKey = System.getenv("OPENAI_API_KEY");
final OpenAIClient client = OpenAIClient.builder().setApiKey(openAiKey).build();
File file = new File("/Users/richardhightower/Documents/audio_notes_patents/meeting_notes1.m4a");
byte[] bytes = Files.readAllBytes(file.toPath());
// Create the chat request
final TranscriptionRequest request = TranscriptionRequest.builder()
.model("whisper-1").prompt("Write up notes").language("en").file(bytes)
.build();
// Call Open AI API with chat message
final ClientResponse<TranscriptionRequest, AudioResponse> response = client.transcribe(request);
response.getResponse().ifPresent(r -> System.out.println(r.getBody()));
response.getException().ifPresent(Throwable::printStackTrace);
response.getStatusMessage().ifPresent(error -> System.out.printf("status message %s %d \n", error, response.getStatusCode().orElse(0)));
}
```
### ChatGPT Results from the actual prompt of what we want
Sure, here’s a Mermaid sequence diagram that represents your provided Java method:
sequenceDiagram
participant Client as Java Client
participant Sys as System Environment
participant OAC as OpenAIClient
participant FS as File System
participant Req as Transcription Request
participant API as OpenAI API
participant Resp as Client Response
participant AR as Audio Response
Client->>Sys: Get OPENAI_API_KEY
Client->>OAC: Build OpenAIClient with OPENAI_API_KEY
Client->>FS: Read audio file
Client->>Req: Build Transcription Request
Client->>OAC: Send Transcription Request
OAC->>API: Forward Transcription Request
API->>Resp: Send back Client Response
Resp->>Client: Forward Client Response
Client->>AR: Print response body, status, or error
In this diagram:
- The Java Client (Client) gets the OpenAI API key from the system environment (Sys) and uses it to build the OpenAI client (OAC).
- The Client then reads the audio file from the file system (FS) and constructs a transcription request (Req).
- The Client sends the transcription request to the OpenAI client (OAC), which forwards it to the OpenAI API (API).
- The OpenAI API sends back a client response (Resp), which is forwarded back to the Java Client.
- The Java Client then prints the response body (AR) or any status messages or errors.

Please note that this is a simplified representation. In a real-world application, you might also want to include aspects like error handling, retries, and other considerations.
End Chat GPT session…
…
That was a long ChatGPT session to do the synthetic prompting, right?
Of course, that is cheating, since our examples and backward examples were precisely the same as what we requested. Let’s try this with another method and see if it also works.
### PROMPT Actual Prompt of what we want no cheating
Create a Mermaid sequence diagram for the following Java method:
```java
private static void callEmbeddingExample() {
// Create the client
final OpenAIClient client = OpenAIClient.builder()
.setApiKey(System.getenv("OPEN_AI_KEY"))
.build();
// Create the chat request
final EmbeddingRequest request = EmbeddingRequest.builder()
.model("text-embedding-ada-002")
.input("What is AI?")
.build();
// Call Open AI API with chat message
final ClientResponse<EmbeddingRequest, EmbeddingResponse> response
= client.embedding(request);
response.getResponse().ifPresent(completionResponse ->
completionResponse.getData().forEach(System.out::println));
response.getException().ifPresent(Throwable::printStackTrace);
response.getStatusMessage().ifPresent(error ->
System.out.printf("status message %s %d \n",
error, response.getStatusCode().orElse(0)));
}
```
### ChatGPT Results from Actual Prompt of what we want no cheating
sequenceDiagram
participant Client as Java Client
participant Sys as System Environment
participant OAC as OpenAIClient
participant Req as Embedding Request
participant API as OpenAI API
participant Resp as Client Response
participant ER as Embedding Response
Client->>Sys: Get OPEN_AI_KEY
Client->>OAC: Build OpenAIClient with OPEN_AI_KEY
Client->>Req: Build Embedding Request
Client->>OAC: Send Embedding Request
OAC->>API: Forward Embedding Request
API->>Resp: Send back Client Response
Resp->>Client: Forward Client Response
Client->>ER: Print response data, status, or error
***In this diagram:***
1. ***The Java Client (Client) retrieves the OpenAI API key from the system environment (Sys) and builds the OpenAI client (OAC).***
2. ***The Client then constructs an embedding request (Req) and sends it to the OpenAI client.***
3. ***The OpenAI client forwards the embedding request to the OpenAI API (API).***
4. ***The OpenAI API sends back a client response (Resp), which is forwarded to the Java Client.***
5. ***The Java Client then processes the Embedding Response (ER), printing the response data or any status messages or errors.***
***As always, this is a simplified representation. More complex real-world applications might include additional error handling, retries, and other considerations.***
***End Chat GPT Response***
### ***...***
That worked great for me. Let's see how ChatGPT 3.5 would do with the same synthetic prompting.
### Here is what ChatGPT 3.5 Turbo produced from the same Synthetic prompting
sequenceDiagram
participant Client as Java Client
participant System as System Environment
participant API as OpenAI API
participant File as Audio File
participant Req as Transcription Request
participant Resp as Client Response
Note over Client: Call callTranscribe()
Client->>System: Get OPENAI_API_KEY
System-->>Client: OPENAI_API_KEY
Client->>API: Create OpenAIClient with OPENAI_API_KEY
API-->>Client: OpenAIClient
Client->>File: Read file contents
File-->>Client: Audio data (bytes)
Client->>Req: Build Transcription Request
Note over Req: model: whisper-1\\nprompt: Write up notes\\nlanguage: en\\nfile: audioData
Req-->>Client: TranscriptionRequest
Client->>API: Call Open AI API with TranscriptionRequest
API-->>Client: ClientResponse<TranscriptionRequest, AudioResponse>
Note over Client: Handle the API response
Client->>Resp: Handle ClientResponse
Note over Resp: Check for response data and errors\\nIf response contains body, print it\\nIf there is an exception, print the stack trace\\nPrint status message and status code

How did ChatGPT 3.5 do? In the words of the famous YouTuber Uncle Roger, “Not bad, not bad.”
ChatGPT 3.5 Turbo also gets a solid A. It produced a strange, very long note, but the diagram is reasonably good. We could add something to the prompt about keeping notes under ten words. Strangely, none of the synthetic prompt examples included notes, but this output does.
### Recap of ChatGPT session using Synthetic Prompts
Let's recap the Synthetic Prompt chat session and my discussion with my buddy ChatGPT. The diagrams seem the best so far. The final results are both A++. These are the best versions, while others are good. These results are stellar. Put Synthetic Prompting in your bag of tricks!
Some key takeaways: zero-shot CoT performed better than few-shot CoT, which I was not expecting. Synthetic prompting stole the show; I am quite shocked at how well it did. Every use case will be different. Again, remember that it is not just about having context but having the right amount of it. In this use case, synthetic prompting was a clear winner. We have yet to try Tree of Thought, because that would require additional development and a way to provide feedback programmatically; we will cover it in a future article.
### Conclusion
In conclusion, Chain-of-Thought (CoT) prompting is a powerful technique that enhances the reasoning capabilities of Large Language Models (LLMs). By breaking down complex tasks into intermediate steps, CoT prompting allows LLMs to tackle challenges like common-sense reasoning and arithmetic that usually require multi-step problem-solving. Implementing CoT prompting involves defining the problem, decomposing it into a series of intermediate steps, and instructing the LLM using prompt engineering techniques. CoT prompting can be implemented with strategies such as few-shot CoT and zero-shot CoT, which guide LLMs through a series of intermediate steps to solve complex reasoning tasks. Synthetic prompting is another technique that can enhance LLMs' reasoning capabilities and generate a diverse set of chain-of-thought prompts. Ultimately, by understanding the potential of CoT prompting, we can unlock new possibilities for LLM development and application, pushing the boundaries of artificial intelligence.
## More from the author
- [Java Open AI Client](https://www.linkedin.com/pulse/jai-java-ai-open-api-client-rick-hightower)
- [Using ChatGpt embeddings and Hyde to enhance search results](https://www.linkedin.com/pulse/using-chatgpt-embeddings-hyde-enhance-search-results-rick-hightower)
- [Anthropics Claude Chatbot Gets Upgrade](https://www.linkedin.com/pulse/anthropics-claude-chatbot-gets-upgrade-meet-2-rick-hightower)
- [Elon Musks XAi's new frontier for artificial intelligence](https://www.linkedin.com/pulse/elon-musks-xai-new-frontier-artificial-intelligence-rick-hightower)
- [Using Mockito to test JAI Java Open AI Client](https://www.linkedin.com/pulse/dev-notebook-mockito-witchcraft-jai-java-ai-open-api-client-rick)
- [Fine-tuning journey with Open AI API](https://www.linkedin.com/pulse/redefining-ai-training-unanticipated-impacts-openais-api-hightower)
- [Using Open AI to create callback functions, the basis for plugins](https://www.linkedin.com/pulse/using-chatgpt-chat-functioncall-from-java-rick-hightower)
- [Using Java Open AI Client Async](https://www.linkedin.com/pulse/speeding-up-your-ai-powered-search-jai-async-rick-hightower)
- [Fastest Java JSON Parser](https://github.com/nats-io/jparse)
- [Java Open AI `API` Client on Github](https://github.com/RichardHightower/jai)
- [Medium: Introducing Java Open AI Client](https://medium.com/@richardhightower/jai-open-ai-api-java-clientintroducing-jai-a-simple-and-efficient-java-client-powered-by-open-3b2e8a00e1a)
- [Medium: Using ChatGPT, Embeddings, and HyDE to enhance Search Results](https://medium.com/@richardhightower/using-chatgpt-embeddings-and-hyde-to-enhance-search-results-5e518890026)
## About the Author
Rick Hightower is a seasoned software engineer and AI enthusiast with over two decades of experience in the tech industry. He is passionate about exploring the intersection of artificial intelligence and software development, particularly in the realm of Java applications.

As the creator of the Java Open AI Client (JAI) and a prolific writer on AI-related topics, Rick has established himself as a thought leader in the field of AI integration with Java. His articles cover a wide range of subjects, from practical implementations of ChatGPT and embeddings to in-depth analyses of emerging AI technologies.
Rick's expertise extends beyond AI, encompassing areas such as high-performance computing, JSON parsing, and software architecture. He is also known for his contributions to open-source projects and his commitment to sharing knowledge through detailed technical articles and tutorials.
When not coding or writing about AI, Rick enjoys mentoring aspiring developers and speaking at tech conferences. He continues to push the boundaries of what's possible with AI and Java, always looking for innovative ways to solve complex problems in software development.