The Workshop on AI in Testing (WAIT) #2
WAIT is a small, two-day, online, non-commercial, LAWST-style peer conference.
| | |
| --- | --- |
| Facilitator | Jon Bach |
| Content Owner | James Bach |
| Dates | June 29-30, 2024 |
| Times | 7am-1pm PDT (16:00-22:00 UTC+2) |
| Media | Zoom |
| Attendees | Up to 20 |
Who can attend?
We are looking for people with experience testing AI systems and/or applying AI to testing.
If you are such a person and you want to be invited, send an email to peerconference@satisfice.com. Summarize your experience and confirm that you are willing to give an experience report. We may accept people who are not offering an experience report, but we will favor those who have one to share.
More About the Theme of the Conference
Testers such as James Bach, Michael Bolton, Wayne Roseberry, Nate Custer, and Ben Simo have run experiments and done close analyses of public demos of purported uses of AI to make testing better or faster. For the most part, what they’ve seen is underwhelming. Some of it is laughably bad. Yet, claims continue to be made that AI can help testing. Companies that produce development and testing tools are apparently racing to put AI features into their products.
Is there anything about this trend that lives up to the hype? Or is it all just a big noise, signifying nothing? What the industry needs are sober testing professionals to evaluate these claims.
We’d like to hear experiences from anyone who has tried to use AI for real testing (this can include a realistically complex experiment) and evaluated the results, rather than merely trusting that the tool worked. We are not interested in AI fanboys demoing their latest reskinning of ChatGPT. If all you have is a flashy demo, you’ll get torn to pieces.
The content owner will particularly seek the answer to these questions:
- What does the AI claim to do?
- Did the AI really do what it claimed to do?
- Can it reliably do that under realistic conditions?
- What special work must humans do to support it and supervise it?
What is a LAWST-style peer conference?
A LAWST-style peer conference has a facilitator, a content owner, a theme, and contributors. The facilitator stays out of the discussions. The content owner determines what is on-topic. The theme describes the topic, and each contributor comes ready to present an experience report and/or critically question the reports that are delivered. Peer conferences are the best way we know to share practical technical knowledge on a three to six pizza scale.
An experience report is not a product demo, nor a conceptual presentation about a putative best practice. An experience report is a situation in your professional life where you faced problems and tried to solve them. Whether or not you did solve them, you learned from that experience, and you want to share that learning with others. Experience reports do not require a slide show. You can just talk, or show screenshots or documents. In other words, we don’t need you to do a lot of preparation.
The format of the conference is that someone gives an experience report (typically 15-30 minutes) and then we move to “open season,” where they are asked critical questions by their peers (including the content owner). The presenter responds to questions, comments, and concerns until there are no more left to discuss. There is no set time limit for open season. This means we don’t know in advance how many experience reports we will get through. (At the group’s discretion, we may decide to share some experience reports as “lightning talks” if time is running out.)
Will this meeting be recorded?
Although the meeting itself will not be recorded (to encourage frank discussion and debate), any participant will be free to publish their notes about what transpired, or to reshare any materials that were shared with the gathering.
The organizers will publish a summary of the proceedings.