The Workshop on AI in Testing (WAIT) #3
WAIT is a small, two-day, online, non-commercial, LAWST-style peer conference.
| Facilitator | Jon Bach |
| Content Owner | James Bach |
| Dates | June 29-30, 2024 |
| Times | 7am-1pm PDT (16:00-22:00 UTC+2) |
| Media | Zoom |
| Attendees | Up to 20 |
Who can attend?
- Do you have experience systematically testing an AI system?
- Do you have experience applying AI to systematic software testing?
You are qualified to attend if you can answer yes to either or both of these. (On a space-available basis, we may make exceptions for people who are veterans of testing peer conferences or who have a substantial reputation in the testing field.)
We are especially interested if you can share an experience report, which means:
- You have an experience that taught you something important about testing AI or using AI in testing.
- You can share details about that experience (not confidential details, but enough detail that your technical peers can understand what you did, why you did it, and make independent judgments about it).
If you are such a person and you want to be invited, send an email to peerconference@satisfice.com. Summarize your experience and say if you are willing to give an experience report. We may accept people who are not offering an experience report, but we will favor those who have one to share.
More About the Theme of the Conference
We want to discuss, analyze, and understand how to test AI systems, or how AI systems may aid testing.
Testers such as James Bach, Michael Bolton, Wayne Roseberry, Nate Custer, and Ben Simo, and developers such as Carl Brown, have run experiments and done close analyses of public demos of purported uses of AI to make testing (or development) better or faster. For the most part, what they’ve seen is underwhelming. Some of it is laughably bad. Yet, claims continue to be made that AI will change the world of software development and testing. Companies that produce development and testing tools are racing to put AI features into their products.
Is there anything about this trend that lives up to the hype? Or is it all just a big noise, signifying nothing? What the industry needs are sober testing professionals to evaluate these claims.
We’d like to hear experiences from anyone who has tried to use AI for real testing (this can include a realistically complex experiment) and evaluated the results, rather than merely trusting that the tool worked. We are not interested in AI fanboys demoing their latest reskinning of ChatGPT. If all you have is a flashy demo, you’ll experience pointed skepticism from the group.
The content owner will particularly seek the answer to these questions:
- What does the AI claim to do?
- Did the AI really do what it claimed to do?
- Can it reliably do that under realistic conditions?
- What special work must humans do to support it and supervise it?
What is a LAWST-style peer conference?
A LAWST-style peer conference has a facilitator, a content owner, a theme, and contributors. The facilitator stays out of the discussions. The content owner determines what is on-topic. The theme describes the topic, and each contributor comes ready to present an experience report and/or critically question the reports that are delivered. Peer conferences are the best way we know to share practical technical knowledge on a three to six pizza scale.
An experience report is not a product demo, nor a conceptual presentation about a putative best practice. An experience report describes a situation in your professional life where you faced problems and tried to solve them. Whether or not you solved them, you learned from that experience, and you want to share that learning with others. Experience reports do not require a slide show. You can just talk, or show screenshots or documents. In other words, we don’t need you to do a lot of preparation.
The format of the conference is that someone gives an experience report (typically 15-30 minutes) and then we move to “open season,” where they are asked critical questions by their peers (including the content owner). The presenter responds to questions, comments, and concerns until there are none left to discuss. There is no set time limit for open season. This means we don’t know in advance how many experience reports we will get through. (At the group’s discretion, we may decide to share some experience reports as “lightning talks” if time is running out.)
Will this meeting be recorded?
Although the meeting itself will not be recorded (to encourage frank discussion and debate), any participant will be free to publish their notes about what transpired, or to reshare any materials that were shared with the gathering.
The organizers will publish a summary of the proceedings.
Maya Ayoub says
I’ve been in the testing field for 10+ years now, and I would love to attend this workshop to share and learn more about testing with AI alongside the best testers out there.
[James’ Reply: Then I suggest that you try to use AI on a real testing problem and tell us about what happens. Let me know when you have an experience report.]
Varsha Patil says
Hi Michael
I’m very interested in knowing how the testing arena will change with AI and would love to hear from the experts.
Thanks