I hate tedious tasks. When faced with entering timesheet data into an RDP session that blocked copy-paste, I automated it with a Bash script using sendkeys. Creating these automations has become trivial with AI—while I forget the syntax of automation tools, AI doesn't. Automating a recurring 1-hour task now often takes less than one hour.
Using AI to process data
I've spent weeks searching for new projects, which means reading countless project descriptions and matching them against my skillset. The market is flooded with fake offers and duplicates from different recruiters. Since most recruiters use AI to score applications anyway, I decided to flip the script: use AI to find fitting projects and write tailored applications automatically.
Even further down the road, automatically processing each project description could dynamically tailor my CV to include only the skills relevant to that project, instead of producing a document full of buzzwords.
The solution I settled on combines AI orchestration with specialized browser automation through the Model Context Protocol (MCP). Instead of building a complex scraping system or relying on fragile generic automation, I created custom tools that AI can use to interact with websites efficiently.
What is MCP?
The Model Context Protocol (MCP) is a standardized way to connect AI models with external tools and data sources. Instead of the AI trying to figure out how to interact with complex systems, MCP servers provide specialized tools that the AI can call with simple parameters. Think of it as giving AI a remote control for your applications rather than making it guess which buttons to press.
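Under the hood, MCP speaks JSON-RPC: when the model wants to use a tool, the client sends a tools/call request naming the tool and its arguments, and the server answers with content blocks. A sketch of that exchange (field names follow the MCP spec; the tool name and argument values are illustrative):

```typescript
// Illustrative shape of an MCP tools/call exchange.
// Field names follow the MCP spec; the tool name and arguments are examples.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "search_projects",
    arguments: { query: "AWS" },
  },
};

// The server replies with content blocks; here a single text block
// carrying JSON produced by the tool.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: '{"success":true,"projects":[]}' }],
  },
};
```

The AI never sees the website; it only sees tool names, argument schemas, and these structured replies.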
Building custom browser automation
Rather than using generic browser automation, I built specialized MCP tools for specific websites.
To give an AI access to project listings, you can either use an API that publishes projects or some form of MCP browser automation. Since I wanted to fold more and more tedious website tasks into this solution, I went with the browser automation approach.
There are existing projects that handle this, like Browser MCP or PlayMCP. While I had some success with those tools, they burn through tokens rapidly working out which buttons to press and which labels to fetch. Instead, I recorded the interactions with Playwright and provided higher-level tools like get_projects_for_keyword or prepare_application. Tailored tools also reduce the risk of Claude randomly pushing wrong buttons or filling data into incorrect input fields.
Implementing WebsiteTools
The flow (using project search as an example) works as follows. The MCP server architecture follows a simple pattern: a main server process connects to a browser and exposes specialized tools through the MCP protocol. The implementation consists of two main components: the tool implementation and the automation logic.
Tool Implementation
Each specialized tool follows a consistent pattern with a schema definition and handler function. Here's the structure for a search tool:
const searchProjects: Tool = {
  schema: {
    name: "search_projects",
    description: "Search for projects with automatic filtering",
    inputSchema: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "Search query for projects (e.g., 'AWS', 'React', 'Python')"
        }
      },
      required: ["query"]
    }
  },
  handle: async (args: any, context: any) => {
    const { query } = args;
    return await search_projects(query, context);
  }
};
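How a tool like searchProjects plugs into the server can be sketched as a registry plus a dispatcher; the Tool interface below is a simplified stand-in for whatever types the MCP SDK actually provides:

```typescript
// Simplified sketch of how the server tracks and dispatches tools.
// The Tool interface is a stand-in for the real MCP SDK types.
interface Tool {
  schema: { name: string; description: string; inputSchema: object };
  handle: (args: any, context: any) => Promise<any>;
}

const registry = new Map<string, Tool>();

function registerTool(tool: Tool): void {
  registry.set(tool.schema.name, tool);
}

async function dispatch(name: string, args: any, context: any): Promise<any> {
  const tool = registry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handle(args, context);
}
```

The schemas of all registered tools are what the server advertises to Claude via tools/list; the dispatcher routes each tools/call to the matching handler.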
Automation Logic
The actual automation logic handles the complexity of navigating and interacting with website elements. The interaction to perform a search or any other task can be captured by any playwright recorder utility.
export async function search_projects(query: string, context: any) {
  const page = await context.getActivePage();
  try {
    await page.goto('https://example-job-board.com/');
    // Recorded Playwright steps to fill the search form and submit go here.
    // Extract the listings; the selector is specific to the target site.
    const projects = await page.$$eval('.project-card', (cards: Element[]) =>
      cards.map((card) => card.textContent?.trim() ?? '')
    );
    return {
      content: [{ type: "text", text: JSON.stringify({ success: true, projects }, null, 2) }]
    };
  } catch (error: any) {
    return {
      content: [{ type: "text", text: JSON.stringify({ success: false, message: error.message }, null, 2) }]
    };
  }
}
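Because the tool returns plain text containing JSON, downstream steps can parse it back into a typed structure. A small helper sketch (the ToolResult and SearchOutcome shapes mirror the result object above; both names are my own, not from an SDK):

```typescript
// The tool returns plain text containing JSON; downstream steps parse it
// back into a typed structure. The shapes mirror the result object above.
interface ToolResult {
  content: { type: string; text: string }[];
}

interface SearchOutcome {
  success: boolean;
  projects?: string[];
  message?: string;
}

function parseSearchResult(result: ToolResult): SearchOutcome {
  // Each tool in this design returns exactly one text block.
  return JSON.parse(result.content[0].text);
}

const sample: ToolResult = {
  content: [{ type: "text", text: '{"success":true,"projects":["Azure Migration"]}' }],
};
```

Keeping the success flag and error message in the same JSON shape means the AI can react to failures without any special-casing.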
Why this approach works better
This specialized tool approach provides several key advantages over generic browser automation:
Token Efficiency: Instead of the AI analyzing DOM structures and figuring out which buttons to click, the MCP server handles the navigation logic. A single search_projects call with a query parameter replaces dozens of generic browser commands.
Reliability: The automation logic is tested and handles edge cases specific to the target website. No more "I can't find the button" or "The page layout changed" issues that occur with generic automation.
Structured Output: Rather than asking AI to parse HTML or screenshots, the tools return structured JSON data that's immediately usable for further processing.
Maintainability: When website layouts change, you only need to update the specific tool implementation rather than retraining or adjusting AI prompts.
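One concrete way to make that maintainability point pay off is to keep all site-specific selectors in a single map, so a layout change means editing one entry rather than hunting through the automation code. A sketch (the selector values are invented):

```typescript
// Keep all site-specific selectors in one map: a layout change on the
// job board means editing one entry here. Selector values are invented.
const SELECTORS = {
  searchInput: "input#search",
  searchButton: "button[type=submit]",
  projectCard: ".project-card",
  projectTitle: ".project-card h3",
} as const;

function selector(key: keyof typeof SELECTORS): string {
  return SELECTORS[key];
}
```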
Putting it all together
With these custom MCP tools in place, the entire workflow becomes seamless. I can now ask Claude: "Find Azure projects that are a good match" or "Prepare an application for project #2".
The result looks something like this:

What previously required manual browsing, note-taking, and form filling now happens automatically in the background while I focus on something else.
The real power emerges when you realize this pattern scales beyond project acquisition. Any repetitive web task—whether it's monitoring competitor pricing, aggregating research data, or managing social media—can be transformed into reliable, AI-orchestrated workflows. The upfront investment in building specialized MCP tools pays dividends every time you avoid clicking through the same tedious sequence of web pages. In a world where AI handles the orchestration and custom tools handle the execution, the only limit is your willingness to automate away life's digital busy work.