We are looking for a detail-oriented and technically curious Manual QC/QA specialist for two related AI products: OpenClaw and AI Factory. Both projects are platforms for creating and managing AI bots with a Telegram interface.
This is not a “button-clicking” role — it’s important to understand how systems work under the hood and to identify the root causes of issues, not just report that “something is broken.”
Project 1: OpenClaw (ZeroClaw)
What is it?
OpenClaw is an intelligent AI assistant for a CEO. It runs on a virtual server in Google Cloud and communicates with users via Telegram. The bot can:
* Answer questions and assist with tasks
* Work with Gmail (read, send, and classify emails)
* Integrate with Google Calendar and Google Sheets
* Perform business tasks: information search, data analysis, planning
* Integrate with external services (ClickUp, Slack, WhatsApp)
In addition to the bot, there is an admin panel called SwarmClaw, a web interface for monitoring the state of all system components.
What needs to be tested
1. Telegram bot
* Verify that the bot responds to commands (/help, /skills)
* Monitor response quality (no leakage of internal tags or raw technical data)
* Test integrations with Google services via the bot
* Ensure proper formatting of outputs (lists, tables)
2. Email processing (CEO Mailbot)
* Urgent emails should be delivered to Telegram with quick reply options
* Regular emails should be accumulated and delivered as a morning digest
* Spam should be automatically archived without notifications
* Action buttons (Send, Edit, Snooze, Archive) must work correctly
3. SwarmClaw admin panel
* Ensure all bot components are displayed as “alive”
* Test sending commands to bots via the web interface
* Verify configuration and settings
4. Stability and availability
* The bot should be available in Telegram almost all the time
* It should recover quickly after server restarts
* Logs should be readable and informative
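The availability check above can be scripted against the real Telegram Bot API `getMe` endpoint. A minimal sketch follows; the helper names and the canned response are illustrative and not part of either product:

```python
import json

TELEGRAM_API = "https://api.telegram.org"

def get_me_url(token: str) -> str:
    # Build the official Bot API getMe URL for a given bot token
    return f"{TELEGRAM_API}/bot{token}/getMe"

def is_bot_alive(raw_response: bytes) -> bool:
    # Telegram wraps every reply in {"ok": <bool>, "result": {...}}
    payload = json.loads(raw_response)
    return bool(payload.get("ok")) and "id" in payload.get("result", {})

# Canned response instead of a live call; a real smoke check would fetch
# get_me_url(...) with urllib.request.urlopen and alert on failure or timeout.
canned = b'{"ok": true, "result": {"id": 123, "is_bot": true, "username": "demo_bot"}}'
print(is_bot_alive(canned))  # True
```

In practice such a probe would run on a schedule and feed the "available almost all the time" expectation with concrete uptime data.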
Project 2: AI Factory
What is it?
Essentially, it is a bot builder that requires no coding or server setup.
What needs to be tested
AI Factory is a Bot-as-a-Service platform: through a user-friendly web interface, users can create their own Telegram bot, select plugins (e.g., AI chat, data analyst, recruiter), and the platform automatically deploys the bot in the cloud.
1. Bot creation process
* The user enters a Telegram bot token; the bot is created and launched automatically
* The full process (from creation to launch) takes only a few minutes
* Bot status is displayed in the interface (creating, starting, running, error)
2. Plugin marketplace
* Users can browse the plugin catalog
* Install plugins to their bot
* Remove plugins
* After installation, the bot must start using the plugin’s functionality
3. Bot lifecycle
* Start and stop the bot
* Delete the bot (and its associated infrastructure)
* Restart after configuration changes
4. Security and isolation
* Users can only see their own bots
* It must be impossible to access another user’s bot
* Different roles (owner, admin, viewer) have different permissions
5. Web interface
* All pages must load correctly
* Support switching between light and dark themes
* States like “loading,” “empty,” and “error” should be displayed properly
* Forms must not submit if required fields are missing
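The role checks in the security section lend themselves to table-driven test cases. A minimal sketch, where the role and action names are assumptions rather than the product's real permission model:

```python
# Hypothetical role-to-permission map, only for sketching the checks above;
# the actual AI Factory roles and actions may differ.
PERMISSIONS = {
    "owner":  {"view", "configure", "delete"},
    "admin":  {"view", "configure"},
    "viewer": {"view"},
}

def can(role: str, action: str) -> bool:
    # Unknown roles get no permissions at all
    return action in PERMISSIONS.get(role, set())

# The kind of cases a QA checklist would assert:
print(can("owner", "delete"))   # True
print(can("viewer", "delete"))  # False
print(can("stranger", "view"))  # False
```

A tester would walk such a table for every role/action pair, including deliberately invalid roles, rather than spot-checking a few paths.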
General Requirements for the Candidate
Professional skills
* At least 1 year of experience in web application testing
* Ability to write test cases and checklists
* Understanding of testing types: functional, regression, smoke testing
* Experience with bug trackers (Jira, YouTrack, or similar)
* Basic understanding of APIs (GET/POST requests)
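For the "basic understanding of APIs" point, the core distinction expected is roughly the following (a Python standard-library sketch; the endpoint is a placeholder and no request is actually sent):

```python
from urllib import parse, request

# GET: parameters travel in the URL query string
params = parse.urlencode({"user_id": 42})
get_req = request.Request(f"https://api.example.com/users?{params}", method="GET")

# POST: parameters travel in the request body
body = parse.urlencode({"name": "Alice"}).encode()
post_req = request.Request("https://api.example.com/users", data=body, method="POST")

print(get_req.get_method(), get_req.full_url)
print(post_req.get_method(), post_req.data)
```

Being able to tell whether a parameter belongs in the query string or the body, and to inspect either in a bug report, is the level of API literacy meant here.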
Nice to have
* Experience with cloud platforms (Google Cloud, AWS)
* Understanding of Docker containers
* Experience testing Telegram bots or messaging platforms
* Basic Linux skills (log reading, basic commands)
* Experience testing chat interfaces or AI systems
Personal qualities
* Attention to detail — ability to spot minor bugs and UI inconsistencies
* Technical curiosity — desire to understand how systems work under the hood
* Independence — ability to figure things out using documentation
* Accuracy — ability to clearly describe bugs so developers can reproduce them
* Prioritization skills — understanding that critical bugs outweigh minor improvements
What we offer
* Work on interesting AI products in a friendly team
* Opportunity to impact product quality and see the results of your work
* Flexible schedule (remote or hybrid)
* Career growth: from Manual QC to automation or quality analyst roles
Sample tasks for the first month
* Perform regression testing of OpenClaw after updates
* Test new plugins for AI Factory before release
* Prepare a checklist for regular smoke testing
* Identify and document bugs in current products
* Verify integration with new Google services
Required languages
* English: B2 (Upper Intermediate)
* Ukrainian: C2 (Proficient)
If you are interested, please send your full, up-to-date CV highlighting your manual QA/QC experience with AI products and AI tools, along with a link to your Telegram account :)