AI¶
In RVL, the AI action is used to trigger static AI code generation. The format of the AI command without parameters is:
Flow | Type | Object | Action | ParamName | ParamType | ParamValue |
---|---|---|---|---|---|---|
AI | Command text explaining the command step | | | | | |
Example:
Flow | Type | Object | Action | ParamName | ParamType | ParamValue |
---|---|---|---|---|---|---|
AI | Login to the system as librarian/librarian | | | | | |
Or together with parameters:
Flow | Type | Object | Action | ParamName | ParamType | ParamValue |
---|---|---|---|---|---|---|
AI | Command text explaining the command step and using {someParam1}, {someParam2}, {...} etc | | | | | |
Param | someParam1 | string | ... | | | |
Param | someParam2 | string | ... | | | |
Param | ... | string | ... | | | |
In other words, to pass a parameter you reference it in the AI command text in curly braces, e.g. {param1}, and then add a parameter with the same name without braces: param1.
AI command parameters represent dynamic values that may vary between executions, so the generated code stays intact while the values change.
Example:
Flow | Type | Object | Action | ParamName | ParamType | ParamValue |
---|---|---|---|---|---|---|
AI | Login to the system as {username}/{password} | | | | | |
Param | username | string | librarian | | | |
Param | password | string | librarian | | | |
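A purely illustrative sketch of what the generated snippet could look like for this command, assuming a POCommon.DoLogin(username, password) page-object method and that the Param values are exposed to the snippet under their names; the values can then change without re-generating the code:

POCommon.DoLogin(username, password)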
Editing and Multiline Commands¶
If the row type is AI, the command description must be non-empty. An AI command may span multiple lines and supports limited formatting (bold and italic).
To add a new line while editing a cell, press F2 first; pressing ENTER will then insert a line break.
Implementation¶
Each AI action is transformed into a JavaScript code snippet. Given the unpredictable nature of today's LLMs, all AI activity is expected to be performed once and then reviewed by the test developer. No AI interactions are carried out during runtime, to avoid intermittent and unpredictable results. The AI-generated code is cached and reused when you play the test.
All cached information regarding AI interactions is saved in the %WORKDIR%\AI subfolder.
Action Context¶
Every AI action is performed within a specific context, which can include several elements such as previous actions, variables, repository objects, positive and negative examples, and shared instructions.
AI Prompt Comments¶
AI prompt comments start with ## and are passed directly to the AI command prompt. Such comments help clarify common details that assist in interpreting commands.
For example, this testing framework has page objects POAPI and POCommon. Each has a DoLogin command, i.e., POAPI.DoLogin and POCommon.DoLogin. So it is up to the AI to choose which one to use when generating code. In this example, it preferred to use the API:
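The generated snippet in that case could be something as simple as the following line (the exact DoLogin signature is an assumption here):

POAPI.DoLogin("librarian", "librarian")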
Now, if we want the test case to use the UI, we may need an additional instruction. We start it with ## to let Rapise know that it is meant for the AI. We want to state that all the actions in this RVL should be done through the UI, not the API, and here is the result:
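For instance, the instruction could be phrased as a prompt comment along the lines of "## Perform all actions in this RVL through the UI (POCommon), not the API (POAPI)" (the exact wording is up to you). After re-generating, the snippet would be expected to switch to the UI page object:

POCommon.DoLogin("librarian", "librarian")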
Previous Actions¶
It is often necessary to reference previous actions to ensure that the current AI-generated command harmonizes well with preceding steps. This helps maintain continuity and coherence in the automated sequence of steps within the test case.
For example, here the 2nd action adds an author named after the logged-in user for testing purposes:
Note that the AI command in row 3 uses the same name, and the generated code takes librarian from the previous command. That is, the command is interpreted within the context of the preceding steps.
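A purely illustrative sketch of the first two steps (the exact page-object calls depend on your framework):

POCommon.DoLogin("librarian", "librarian")
// the next AI command names the new author after the logged-in user
POCommon.DoCreateAuthor("librarian", 30)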
Variables¶
Variables allow dynamic data to be used in AI command code. For example, POCommon.DoCreateAuthor has 2 parameters: authorName and authorAge. The default behavior of AI is to use reasonable values when creating an author; in this case, it is 30:
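Without a matching variable, the generated call might look like this (the author name here is illustrative):

POCommon.DoCreateAuthor("Mark Twain", 30)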
But if we have a variable for that, it would try to use it when generating code. See how it used the authorAge variable:
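With the variable in place, the generated call would presumably reference it instead of a hard-coded number:

POCommon.DoCreateAuthor("Mark Twain", authorAge)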
Repository Objects¶
Repository objects play a critical role in contextualizing AI actions within the testing framework. They serve as references to UI elements, enabling the AI-generated commands to interact appropriately with these elements.
There are two ways of having repository objects in the test:
- Use objects belonging to the test case.
- Explicitly include objects using the Repository command.
Test Case Repository¶
In the example below, we have a Calculator application. The calculator UI is a set of buttons and a result display. We learned all objects using the Learn tool:
(Screenshots: the Calculator application window and the learned objects.)
Now we want to implement simple calculation examples, such as checking that 12.5 + 2.5 = 15. AI helps us link it all together like this:
Here you may see that it split the input into individual button clicks (12.5 is Click on 1, Click on 2, Click on Decimal Separator, Click on 5).
SeS("Clear").DoClick()
SeS("1").DoClick()
SeS("2").DoClick()
SeS("Decimal_separator").DoClick()
SeS("5").DoClick()
So to summarize, it re-used the test case's own repository to implement user actions.
Using External Repository¶
The AI command understands external repositories just like local repositories. So whatever is defined using the Repository command will be used.
Each object belonging to an external repository is also wrapped into the O(id) wrapper, which maps the prefixed external repository ID to the actual object:
SeS(O("CalcButtons/Subtract")).DoClick();
Tuning with Positive and Negative Examples¶
Incorporating positive and negative examples can substantially enhance the accuracy and reliability of AI-generated commands. By providing examples of both correct and incorrect outputs, you can guide the AI in generating more precise and relevant code.
Here is an example. Suppose that we have an AI command that is supposed to validate the output of the calculator. Here are the results of generation:
It tries to use a non-existent action, DoVerifyText, so the generated snippet fails when executed:
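The failing attempt would be along these lines (DoVerifyText does not exist on the object, so the line throws at runtime):

SeS("Result").DoVerifyText("15")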
One way to proceed is to declare that this code is not good by using the thumbs down icon:
This way, the example is registered in AIExamples.txt, which will be used as a reference with each subsequent call to the AI:
And subsequent attempts to generate code will take alternative approaches to the same task:
The code is still wrong, but it is closer to what is needed and may be tweaked.
You may tweak AIExamples.txt any time later by adding/removing positive and negative examples. You may open it from the Shared/AI node.
We may provide a good example to AI by changing the code. For example, the validation above:
Tester.AssertEqual(SeS("Result").GetText(), "15")
Needs to be tweaked, because GetText() returns the name of the object, while the actual result of the calculation is returned by GetValue(), as we can see in the Verify Object Properties dialog:
Also, we see that the calc result may contain spaces, so the result of 3+2 will actually be "5 ". So we need to trim it for comparison (i.e., use Text.Trim).
One more correction: Tester.Assert... methods always expect the 1st parameter to be a message for the assertion. So, putting it all together, we may manually adjust the validation to be:
Tester.AssertEqual("Check that the result is 15", Text.Trim(SeS("Result").GetValue()), "15")
After executing it and making sure it is working, we may mark it as a reference example for other parts of the testing framework by using the thumbs up:
And it gets registered in AIExamples.txt as a positive example:
Once we have this example, the generation for other cases also changes. For example:
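A similar validation for another calculation would now presumably be generated in the same shape (the expected value 17.5 is just for illustration):

Tester.AssertEqual("Check that the result is 17.5", Text.Trim(SeS("Result").GetValue()), "17.5")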
Note
Rapise looks for the AIExamples.txt file in two folders: %WORKDIR%/AI and %WORKDIR%/Shared. If both files exist, information from both of them is used.
Shared Instructions¶
Shared instructions streamline the AI code generation process by providing general guidelines and frameworks that apply across multiple test cases. These instructions ensure consistency and standardization in the AI-generated code.
Shared instructions are defined in the file %WORKDIR%/Shared/AIPrompt.txt. Anything from this file gets appended to each AI request.
For example, when we do a test for Calculator, we always want to press the Clear or C button before doing any subsequent calculation.
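For instance, %WORKDIR%/Shared/AIPrompt.txt could contain a single instruction such as the line below (the exact wording is up to you):

Always press the Clear (C) button before starting a new calculation.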
Once it is defined, we need to re-generate code. And we may see that each test now begins with the same instruction:
SeS(O("CalcButtons/Clear")).DoClick()
For instance, in this example we may see that it is clever enough to do the Clear only before the calculation starts, and not before we check the result:
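A sketch of what such a regenerated sequence for 3 + 2 could look like (the button names 3, Add, and Equals are assumptions based on the earlier examples):

SeS(O("CalcButtons/Clear")).DoClick()
SeS(O("CalcButtons/3")).DoClick()
SeS(O("CalcButtons/Add")).DoClick()
SeS(O("CalcButtons/2")).DoClick()
SeS(O("CalcButtons/Equals")).DoClick()
// no extra Clear before the validation step
Tester.AssertEqual("Check that the result is 5", Text.Trim(SeS("Result").GetValue()), "5")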
Note
Rapise looks for the AIPrompt.txt file in two folders: %WORKDIR%/AI and %WORKDIR%/Shared. If both files exist, information from both of them is used.
Naming¶
With AI, the naming of objects, actions, and methods, as well as descriptive comments, becomes even more important. The better you express your application and its API, the better the AI can combine them to implement the test. Together, this improves both the quality of the test cases and the integrity of the testing framework.
Token Saving¶
When working with AI-generated commands, it is important to be mindful of token consumption, as excessive use drives up cost. Strategies for saving tokens include optimizing prompts, reusing parameterized commands, and minimizing unnecessary elaboration in command descriptions. This keeps the AI-driven testing process efficient and cost-effective.
In Rapise, the whole approach is intended to save cost while maximizing efficiency. Rapise provides AI with well-defined objects and page objects, and AI uses them to generate the code. The generation is done while the test is created, and no AI access is required during runtime.
The only exception is when you update an AI command and do not re-generate it; in that case, the test generates the code and saves it to the cache before executing the command. In most cases, however, executing the test does not spend any tokens.