
1. What You'll Build (and Why It Matters Now)
2. Step 1: Install Claude Code and Configure Your Project Environment
3. Step 2: Build the Meta Ad Library Data Collector
4. Step 3: Add a Google Ads Transparency Collector
5. Step 4: Build the Database Layer and Normalization Engine
6. Step 5: Build the Streamlit Dashboard
7. Step 6: Automate Daily Collection with a Scheduler
8. Step 7: Add AI-Powered Copy Analysis with Claude's API
9. Step 8: Test, Debug, and Harden the Full System
10. The Competitive Intelligence Framework This Dashboard Unlocks
11. Scaling the System: What to Build Next
12. Frequently Asked Questions
13. Key Takeaways
Most marketers believe competitor ad intelligence requires expensive subscriptions to platforms like Semrush, SpyFu, or SimilarWeb. The assumption is that tracking rival campaigns at scale demands enterprise budgets and dedicated analysts. That assumption is wrong, and this tutorial will prove it.
With Claude Code, Anthropic's terminal-based AI coding assistant, any marketer with basic technical curiosity can build a fully functional competitor ad intelligence dashboard in a single afternoon. No agency retainer. No five-figure SaaS contract. Just a working system that monitors rival campaigns, extracts ad copy patterns, flags new creative angles, and surfaces insights your competitors don't know you're watching.
This guide walks through every step, from environment setup to a live dashboard, with exact commands, file structures, and the specific Claude Code prompts that make each component work.
LIVE WORKSHOP, LIMITED SEATS
Want to build this dashboard in real time, with expert guidance? Adventure Media is running Master Claude Code in One Day, a hands-on workshop that takes you from zero to a working AI automation system before the day is over. Seats are filling fast, and previous sessions have sold out weeks in advance.
Register Now, Spots Filling Fast →

What You'll Build (and Why It Matters Now)

Before writing a single line of code, it helps to understand exactly what this dashboard does, why the architecture works, and why building it with Claude Code outperforms manual monitoring methods. The dashboard pulls publicly available competitor ad data from multiple sources, normalizes it into a structured format, and renders a visual summary you can review in minutes each morning.
The system has four core components: data collectors for the Meta Ad Library and the Google Ads Transparency Center, a SQLite database with a normalization layer, a Streamlit dashboard, and a scheduler that runs collection automatically each day.
Why does this matter now? The advertising landscape is shifting faster than at any point in the last decade. OpenAI recently began testing ads in the US on its Free and Go tiers, contextual targeting formats are replacing keyword-based buying in several major channels, and AI-generated ad creative is compressing the production cycles that once gave larger advertisers their lead time advantage. Competitors are moving faster. Intelligence gaps that once lasted weeks now last days.
A system that updates automatically every 24 hours, flags new creative directions, and surfaces copy patterns your rivals are testing gives any marketing team a structural edge that no amount of manual browsing can replicate.
Before starting, confirm you have the following in place:
| Requirement | Minimum Version | Why It's Needed |
|---|---|---|
| Python | 3.10+ | Core runtime for all scripts |
| Claude Code (Anthropic) | Latest stable | AI coding assistant that generates and debugs all code |
| Node.js | 18+ | Required to install Claude Code via npm |
| Streamlit | 1.30+ | Dashboard rendering framework |
| SQLite3 | Included with Python | Local persistent data storage |
| Playwright or Requests-HTML | Latest | Web scraping and page interaction |
| Anthropic API Key | Active subscription | Powers Claude Code's code generation |
Estimated total setup time: 45–60 minutes for a first-time user. Estimated build time for the full dashboard: 3–4 hours following this guide.
Step 1: Install Claude Code and Configure Your Project Environment

The first step is getting Claude Code running on your local machine and establishing a clean project directory. Claude Code runs directly in your terminal, meaning it can read, write, and execute files in your working directory. This is what makes it so powerful for automation projects: it is not just answering questions, it is actively building the system alongside you.
Open your terminal and run the following command to install Claude Code globally via npm:
npm install -g @anthropic-ai/claude-code
Once installed, authenticate with your Anthropic API key:
claude
Claude Code will prompt you to authenticate via the Anthropic console. Follow the OAuth flow in your browser and return to the terminal. You'll see a confirmation message when authentication is successful.
For detailed installation documentation, refer to the official Claude Code overview on Anthropic's documentation site.
Create a dedicated project directory and navigate into it:
mkdir competitor-ad-dashboard
cd competitor-ad-dashboard
claude
This launches an interactive Claude Code session inside your project directory. Now give Claude Code its first instruction to scaffold the project:
Your prompt to Claude Code: "Create a Python project structure for a competitor ad intelligence dashboard. Include directories for /collectors, /processors, /database, and /dashboard. Create a requirements.txt with these packages: requests, playwright, beautifulsoup4, streamlit, pandas, plotly, schedule, python-dotenv. Also create a .env.example file with placeholders for META_AD_LIBRARY_TOKEN, COMPETITORS, and DB_PATH."
Claude Code will generate the full directory structure and files. Review the output, then run:
pip install -r requirements.txt
playwright install chromium
Common mistake to avoid: Many users skip creating a virtual environment and install dependencies globally. This causes version conflicts on projects that run long-term. Before installing, always run python -m venv venv and activate it with source venv/bin/activate (Mac/Linux) or venv\Scripts\activate (Windows).
Pro tip: Ask Claude Code to also generate a CLAUDE.md file in your project root. This file acts as persistent context for Claude Code, storing project conventions, variable names, and architectural decisions so every future session starts with the full picture. Prompt: "Create a CLAUDE.md file documenting this project's architecture, the purpose of each directory, and the naming conventions we'll use for database tables and Python modules."
Step 2: Build the Meta Ad Library Data Collector

The Meta Ad Library is the single richest source of publicly available competitor ad data. It exposes active ads from any Facebook or Instagram advertiser, including ad copy, creative format, start dates, and estimated audience reach. The approach here is to have Claude Code write a collector that queries the Meta Ad Library API, parses the JSON response, and stores structured records in your local database.
Before writing code, you need a Meta developer access token. Navigate to the Meta Ad Library API page, log in with a Facebook account, and request access. Meta typically approves API access within 24–48 hours for accounts with verified identity. Once approved, generate an access token and add it to your .env file:
META_AD_LIBRARY_TOKEN=your_token_here
COMPETITORS=competitor_page_id_1,competitor_page_id_2,competitor_page_id_3
To find a competitor's Facebook Page ID, navigate to their page, click "About", and look for the Page ID in the page info section. Alternatively, use a free Page ID lookup tool.
With your project open in Claude Code, use this prompt:
Your prompt to Claude Code: "In the /collectors directory, create a file called meta_collector.py. This script should: (1) Read the META_AD_LIBRARY_TOKEN and COMPETITORS variables from the .env file. (2) For each competitor page ID, query the Meta Ad Library API endpoint at https://graph.facebook.com/v19.0/ads_archive with parameters: ad_reached_countries=['US'], ad_type='ALL', fields='id,ad_creation_time,ad_creative_bodies,ad_creative_link_captions,ad_creative_link_titles,page_name,spend,impressions'. (3) Handle pagination using the 'after' cursor in the API response. (4) Return a normalized list of dictionaries with keys: ad_id, page_name, ad_body, ad_title, ad_caption, start_date, collected_at. (5) Include proper error handling for rate limits and network failures with exponential backoff."
Claude Code will generate the full module. Read through it carefully before running. Pay particular attention to the pagination logic: the Meta API uses cursor-based pagination, and a common bug is an infinite loop when the cursor isn't advanced correctly.
Once Claude Code generates the file, ask it to also write a test:
Your prompt: "Write a quick test script called test_meta_collector.py in the project root that calls the meta_collector.py module with a single competitor page ID and prints the first 3 results to the console."
Run python test_meta_collector.py and verify you're receiving structured ad records. If the API returns an error, paste the full error message back into your Claude Code session. Claude Code excels at debugging: it can read the error, understand the context of your full codebase (since it's running in your project directory), and suggest the precise fix.
Warning: The Meta Ad Library API has rate limits. For projects monitoring more than five competitors, space your requests with a 2–3 second delay between calls to avoid hitting limits. Ask Claude Code to add time.sleep(2) calls between competitor requests if it hasn't already.
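To make the pagination and rate-limit advice concrete, here is a minimal, dependency-free sketch of what a correct cursor loop looks like. The real meta_collector.py that Claude Code generates will differ: the field list is trimmed here, and the `fetch` parameter is a testing convenience, not part of the Meta API.

```python
import json
import time
import urllib.parse
import urllib.request

ADS_ARCHIVE = "https://graph.facebook.com/v19.0/ads_archive"

def build_query(token, page_id):
    # Field list mirrors the prompt above, trimmed for readability
    return urllib.parse.urlencode({
        "access_token": token,
        "search_page_ids": page_id,
        "ad_reached_countries": "['US']",
        "ad_type": "ALL",
        "fields": "id,ad_creation_time,ad_creative_bodies,page_name",
    })

def next_page_url(payload):
    # Meta's cursor pagination: paging.next is a full URL, absent on the
    # last page. Returning None here is what prevents the infinite loop.
    return payload.get("paging", {}).get("next")

def fetch_all_ads(token, page_id, fetch=None, delay=2.0):
    """Collect every page of results for one competitor page."""
    if fetch is None:  # real HTTP fetch; injectable for offline testing
        def fetch(url):
            with urllib.request.urlopen(url, timeout=30) as resp:
                return json.load(resp)
    url = f"{ADS_ARCHIVE}?{build_query(token, page_id)}"
    ads = []
    while url:
        payload = fetch(url)
        ads.extend(payload.get("data", []))
        url = next_page_url(payload)
        if url:
            time.sleep(delay)  # space requests to stay under rate limits
    return ads
```

The loop terminates because `next_page_url` returns None once Meta stops sending a `paging.next` URL, which is exactly the condition the warning above is about.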
Step 3: Add a Google Ads Transparency Collector

Google's Ads Transparency Center offers a searchable public database of ads running across Google Search, YouTube, and Display. Unlike the Meta API, Google does not offer a documented REST API for the Transparency Center, so this collector uses Playwright to automate browser interaction and extract data. This is where Claude Code gets particularly interesting: you're teaching it to write browser automation scripts without needing to know Playwright syntax yourself.
The Google Ads Transparency Center at adstransparency.google.com renders content dynamically via JavaScript, making simple HTTP requests ineffective. Playwright launches a headless Chromium browser, navigates to the page, waits for content to render, and extracts the data via DOM selectors.
Prompt Claude Code with this instruction:
Your prompt to Claude Code: "In the /collectors directory, create google_collector.py. Use Playwright's async API to: (1) Launch a headless Chromium browser. (2) Navigate to https://adstransparency.google.com/advertiser/ followed by the advertiser domain name. (3) Wait for the ad card elements to load (selector: look for article or card-type elements). (4) Extract ad headline text, description text, and the date range shown. (5) Take a screenshot of each ad card and save it to a /screenshots directory with the filename format: [advertiser]_[date]_[index].png. (6) Return a list of dictionaries with keys: advertiser, headline, description, date_range, screenshot_path, collected_at. Include a 3-second wait between page interactions to avoid detection."
Claude Code will generate a Playwright-based async scraper. One nuance to be aware of: Google frequently updates the DOM structure of the Transparency Center, which can break selectors. Ask Claude Code to use multiple fallback selectors and log a warning rather than throwing an error when a selector doesn't match.
Web scraping for competitor intelligence requires resilience. Selectors break. Pages time out. CAPTCHAs appear. Build this tolerance into the system from the start by prompting:
Your prompt: "Add a try/except wrapper around every Playwright interaction in google_collector.py. If an element isn't found within 10 seconds, log a warning with the advertiser name and continue to the next advertiser rather than crashing the script."
Pro tip: The screenshots Claude Code captures serve a dual purpose. They're the actual ad creative archive, and they also provide visual evidence when you want to show your team exactly what a competitor is running, without anyone needing to navigate to the Transparency Center manually. Over time, this screenshot library becomes a valuable creative research asset.
If you want to accelerate your mastery of automation techniques like this, reserve your spot at the Master Claude Code in One Day workshop where these exact patterns are taught live with real projects.
Step 4: Build the Database Layer and Normalization Engine

Raw data from multiple sources needs a consistent home. The database layer is the backbone of the entire system. Without it, you're just running one-off scripts. With it, you have a growing historical record that reveals trends, flags new ads, and powers the dashboard's comparative visualizations.
Navigate to the /database directory in your Claude Code session and prompt:
Your prompt to Claude Code: "In the /database directory, create a file called db_manager.py. Design a SQLite database with the following tables: (1) ads: columns for id (primary key), source (meta/google), competitor_name, ad_id, headline, body_text, cta_text, start_date, end_date, screenshot_path, first_seen, last_seen, is_active. (2) competitors: columns for id, name, meta_page_id, google_domain, industry, added_at. (3) ad_tags: columns for id, ad_id (foreign key to ads), tag_name, tag_value. Write functions for: init_db(), upsert_ad(), mark_inactive_ads(), get_ads_by_competitor(), get_new_ads_since(). Use context managers for all database connections."
The upsert_ad() function is critical. It should insert a new record if the ad_id doesn't exist, or update the last_seen timestamp if it does. This is what allows the system to distinguish between a brand-new ad and an ad that's been running for six weeks, which is itself a signal (a long-running ad is almost certainly profitable for the competitor).
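A minimal sketch of that upsert logic, using a trimmed version of the ads schema (the generated db_manager.py will carry the full column list). The WAL pragma is included because it lets the dashboard read while a collection run is writing, which heads off the SQLite lock errors covered in Step 8.

```python
import sqlite3
from datetime import datetime, timezone

# Trimmed schema; the real table also has cta_text, dates, screenshot_path, etc.
SCHEMA = """
CREATE TABLE IF NOT EXISTS ads (
    id INTEGER PRIMARY KEY,
    source TEXT, competitor_name TEXT,
    ad_id TEXT UNIQUE, headline TEXT, body_text TEXT,
    first_seen TEXT, last_seen TEXT, is_active INTEGER DEFAULT 1
);
"""

def init_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
    conn.executescript(SCHEMA)
    return conn

def upsert_ad(conn, ad):
    """Insert a new ad, or refresh last_seen if the ad_id already exists."""
    now = datetime.now(timezone.utc).isoformat()
    with conn:  # the connection context manager commits or rolls back
        row = conn.execute(
            "SELECT id FROM ads WHERE ad_id = ?", (ad["ad_id"],)).fetchone()
        if row is None:
            conn.execute(
                "INSERT INTO ads (source, competitor_name, ad_id, headline,"
                " body_text, first_seen, last_seen) VALUES (?, ?, ?, ?, ?, ?, ?)",
                (ad.get("source"), ad.get("competitor_name"), ad["ad_id"],
                 ad.get("headline"), ad.get("body_text"), now, now),
            )
            return "inserted"
        conn.execute("UPDATE ads SET last_seen = ? WHERE ad_id = ?",
                     (now, ad["ad_id"]))
        return "updated"
```

The gap between first_seen and last_seen is what encodes "this ad has run for six weeks", the profitability signal described above.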
Data from Meta and Google arrives in different formats. The normalization layer standardizes everything into the common schema before it hits the database. Prompt Claude Code:
Your prompt: "In the /processors directory, create normalizer.py. Write a function normalize_ad(raw_ad, source) that accepts a dictionary from either the meta_collector or google_collector and returns a dictionary matching the ads table schema. Handle missing fields gracefully by defaulting to None. Also write a function tag_ad(ad_dict) that analyzes the headline and body text and adds tags to the ad_tags table for: presence of price mentions (tag: 'has_price'), urgency words like 'limited', 'today', 'now' (tag: 'has_urgency'), question-format headlines (tag: 'question_headline'), and social proof mentions like 'trusted', 'rated', 'reviews' (tag: 'has_social_proof')."
The tagging system is where this dashboard starts delivering real intelligence beyond what any SaaS tool offers out of the box. When you can filter competitor ads by tag and see that a rival started running urgency-heavy copy two weeks ago, that's a competitive signal worth acting on.
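The tagging rules can be as simple as a handful of regular expressions. This sketch implements the four tags named in the prompt; the keyword lists are illustrative assumptions you should tune to your vertical.

```python
import re

# Keyword lists are illustrative assumptions, not a definitive taxonomy.
TAG_RULES = {
    "has_price": re.compile(r"[$€£]\s?\d|\b\d+%\s?off\b", re.IGNORECASE),
    "has_urgency": re.compile(r"\b(limited|today|now|hurry|ends soon)\b", re.IGNORECASE),
    "has_social_proof": re.compile(r"\b(trusted|rated|reviews?|customers)\b", re.IGNORECASE),
}

def tag_ad(ad):
    """Return the tag names that match the ad's headline and body text."""
    text = f"{ad.get('headline') or ''} {ad.get('body_text') or ''}"
    tags = [name for name, rx in TAG_RULES.items() if rx.search(text)]
    # Question-format headlines are structural, not keyword-based
    if (ad.get("headline") or "").rstrip().endswith("?"):
        tags.append("question_headline")
    return tags
```

In the real normalizer.py, these tag names would be written to the ad_tags table so the dashboard can filter on them.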
Create a main orchestration script that ties the collectors and database together:
Your prompt: "Create a file called collect_and_store.py in the project root. This script should: (1) Call the meta_collector to fetch ads for all competitors in the .env file. (2) Call the google_collector for the same competitors. (3) Pass each result through normalizer.normalize_ad(). (4) Call normalizer.tag_ad() on each normalized record. (5) Use db_manager.upsert_ad() to store each result. (6) Print a summary at the end: X new ads found, Y existing ads updated, Z ads marked inactive. Add logging to a file called collection.log."
Test this script end-to-end before moving to the dashboard. A successful run should show structured records appearing in your SQLite database, which you can verify by opening the database file with a tool like DB Browser for SQLite, a free open-source database viewer.
Step 5: Build the Streamlit Dashboard

With data flowing into the database, the next component is the visual dashboard that makes the intelligence accessible to everyone on the team, not just the person who built the system. Streamlit is the right choice here because it turns Python data manipulation code directly into interactive web applications with almost no front-end experience required.
Prompt Claude Code to generate the full dashboard file:
Your prompt to Claude Code: "In the /dashboard directory, create app.py. Build a Streamlit dashboard with the following sections: (1) Sidebar with competitor filter (multi-select), date range selector, and source filter (Meta/Google/All). (2) Top row: three metric cards showing Total Active Ads, New Ads This Week, and Competitors Tracked. (3) A bar chart using Plotly showing ads per competitor, colored by source. (4) A timeline chart showing new ad launches over the past 30 days by competitor. (5) A filterable data table showing all ads with columns: Competitor, Headline, Body Preview (first 80 chars), Tags, Source, First Seen. (6) A 'New Ads Alert' section at the top that highlights any ads first seen in the last 48 hours with a red border. Connect everything to the db_manager functions. Use st.cache_data with a 30-minute TTL on all database queries."
The st.cache_data decorator is important for performance. Without it, every user interaction triggers a full database query, which slows the interface noticeably as the dataset grows.
A summary dashboard is useful, but analysts need to read the actual ad copy. Add a detail view:
Your prompt: "Add a section to app.py called 'Ad Deep Dive'. When the user clicks on a row in the data table, show the full ad details in an expandable panel below the table: full headline, full body text, all tags, the screenshot image (if available), and a link to the source (Meta Ad Library URL or Google Transparency URL). Also add a text area below each ad that says 'Your Notes' and uses st.text_area with a unique key so team members can add observations that save to a notes column in the database."
Launch the dashboard from your terminal:
streamlit run dashboard/app.py
Streamlit opens the application in your default browser at localhost:8501. Share it with your team by running it on a shared server or deploying it to Streamlit Community Cloud, which offers free hosting for Python apps connected to a GitHub repository.
Step 6: Automate Daily Collection with a Scheduler

A dashboard that requires manual script runs is only slightly better than doing the research manually. The real value of automation is that the system runs without you. The scheduler component turns this from a one-time project into a persistent intelligence operation.
Prompt Claude Code:
Your prompt to Claude Code: "Create a file called scheduler.py in the project root. Use the 'schedule' Python library to run collect_and_store.py every day at 7:00 AM local time. After each successful collection, send a summary email using Python's smtplib and the email.mime modules. The email should include the number of new ads found and a list of competitor names with new creative. Read email settings (SMTP_HOST, SMTP_PORT, SMTP_USER, SMTP_PASS, NOTIFY_EMAIL) from the .env file. Also add error alerting: if the collection script fails, send an error notification email immediately."
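For reference, here is a dependency-free sketch of the scheduling logic using only the standard library, in place of the `schedule` package the prompt asks for. The real scheduler.py would also send the smtplib summary email after each run; that part is omitted here.

```python
import time
from datetime import datetime, timedelta

def seconds_until(hour, minute, now):
    """Seconds from `now` until the next occurrence of hour:minute."""
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # already past today; run tomorrow
    return (target - now).total_seconds()

def run_daily(job, hour=7, minute=0):
    """Sleep until the next run time, run the job, repeat. A stdlib
    stand-in for schedule.every().day.at("07:00").do(job)."""
    while True:
        time.sleep(seconds_until(hour, minute, datetime.now()))
        job()  # in the real scheduler.py this would invoke collect_and_store
```

Keeping `seconds_until` as a pure function makes the timing math trivially testable, which the loop itself is not.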
For production deployments on a server or cloud instance, a cron job is more reliable than the Python schedule library. Ask Claude Code to also generate a crontab entry:
Your prompt: "Generate a crontab entry that runs collect_and_store.py every day at 7:00 AM, logs output to /logs/cron.log, and appends rather than overwrites the log file."
Claude Code will output something like:
0 7 * * * /path/to/venv/bin/python /path/to/project/collect_and_store.py >> /path/to/logs/cron.log 2>&1
Add this to your crontab with crontab -e on Mac or Linux.
For teams that live in Slack, a morning message is more actionable than an email. Prompt Claude Code to add a Slack webhook notification:
Your prompt: "Add a Slack notification function to collect_and_store.py. After a successful collection run, send a Slack message to a webhook URL (stored in .env as SLACK_WEBHOOK_URL) with a formatted summary block showing: total new ads, a breakdown by competitor, and a direct link to the dashboard. Format the message using Slack's Block Kit format with a header block and a section block for each competitor that has new ads."
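Slack's Block Kit payload is just nested dictionaries, so the formatting half of this is easy to sketch and test. The function names here are assumptions; posting uses the standard library, though `requests.post` against the same webhook URL works equally well.

```python
import json
import urllib.request

def build_summary_blocks(total_new, per_competitor, dashboard_url):
    """Assemble a Block Kit payload: header + one section per competitor with new ads."""
    blocks = [{"type": "header",
               "text": {"type": "plain_text", "text": f"{total_new} new competitor ads"}}]
    for name, count in per_competitor.items():
        if count:  # skip competitors with nothing new today
            blocks.append({"type": "section",
                           "text": {"type": "mrkdwn", "text": f"*{name}*: {count} new ads"}})
    blocks.append({"type": "section",
                   "text": {"type": "mrkdwn", "text": f"<{dashboard_url}|Open the dashboard>"}})
    return {"blocks": blocks}

def post_to_slack(webhook_url, payload):
    """POST the payload to an incoming-webhook URL (SLACK_WEBHOOK_URL from .env)."""
    req = urllib.request.Request(
        webhook_url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```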
This single addition transforms the system from a passive database into an active intelligence feed that surfaces inside your team's existing workflow each morning.
Step 7: Add AI-Powered Copy Analysis with Claude's API

The final layer is what separates this dashboard from any commercial tool. By calling the Anthropic API directly from within the system, you can have Claude analyze competitor ad copy patterns at scale and surface qualitative insights that no rule-based tagging system can match.
This is where the Anthropic ecosystem becomes genuinely powerful. You're using Claude Code to build a system that then calls Claude's API to analyze the data that system collects. The AI is both the builder and the analyst.
Your prompt to Claude Code: "In the /processors directory, create ai_analyzer.py. This module should use the Anthropic Python SDK to call claude-3-5-haiku-20241022 (the cost-effective model for batch processing). Write a function analyze_competitor_batch(competitor_name, ads_list) that sends a batch of up to 20 ads to Claude with this system prompt: 'You are a competitive intelligence analyst. Analyze this batch of competitor ads and identify: (1) The primary value propositions being emphasized. (2) The target audience signals in the copy. (3) Any notable shifts in messaging compared to generic industry norms. (4) The emotional triggers being used. Return your analysis as JSON with keys: primary_value_props (list), target_audience (string), messaging_shifts (list), emotional_triggers (list), overall_strategy_summary (string).' Store the analysis result in a new database table called competitor_insights with columns: competitor_name, analysis_date, insights_json."
Run this analysis weekly rather than daily to manage API costs. The insights it generates are qualitative, strategic-level observations that would take a skilled analyst hours to produce manually.
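A sketch of what ai_analyzer.py might look like. The `client.messages.create(...)` call matches the Anthropic Python SDK's documented shape; the client is passed in as a parameter so the module stays testable without the SDK installed, and the JSON-extraction helper guards against the model wrapping its answer in prose. The system prompt here is a condensed version of the one in the instruction above.

```python
import json

# Condensed version of the analysis system prompt from the instruction above
SYSTEM_PROMPT = (
    "You are a competitive intelligence analyst. Analyze this batch of "
    "competitor ads and return JSON with keys: primary_value_props, "
    "target_audience, messaging_shifts, emotional_triggers, "
    "overall_strategy_summary."
)

def build_batch_prompt(competitor_name, ads_list, limit=20):
    """Pack up to `limit` ads into one user message to keep API costs down."""
    lines = [f"Competitor: {competitor_name}"]
    for i, ad in enumerate(ads_list[:limit], 1):
        lines.append(f"Ad {i}: {ad.get('headline', '')} | {ad.get('body_text', '')}")
    return "\n".join(lines)

def parse_insights(raw_text):
    """Pull the JSON object out of the reply, tolerating surrounding prose."""
    start, end = raw_text.find("{"), raw_text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model reply")
    return json.loads(raw_text[start:end + 1])

def analyze_competitor_batch(client, competitor_name, ads_list):
    """`client` is an anthropic.Anthropic() instance created by the caller."""
    message = client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user",
                   "content": build_batch_prompt(competitor_name, ads_list)}],
    )
    return parse_insights(message.content[0].text)
```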
Add the insights to the Streamlit app:
Your prompt: "Add a new tab to the Streamlit dashboard called 'AI Insights'. For each competitor, display the most recent analysis from the competitor_insights table. Show the primary value props as tags/badges, the target audience as a highlighted text block, and the overall strategy summary as a card. Add a 'Run New Analysis' button that triggers ai_analyzer.analyze_competitor_batch() for the selected competitor and refreshes the display."
This tab is typically the first thing marketing directors open each week. It translates raw ad data into the strategic narrative that drives creative and positioning decisions.
Step 8: Test, Debug, and Harden the Full System

A system that works once in testing and fails silently in production is worse than no system at all. This step focuses on making the dashboard reliable enough to run without supervision. It is also where Claude Code proves most valuable as an AI coding assistant, because debugging is where most self-taught developers lose momentum.
Prompt Claude Code to generate a comprehensive test suite:
Your prompt to Claude Code: "Create a /tests directory with a file called test_pipeline.py. Write pytest tests that: (1) Test the normalizer with mock data from both Meta and Google formats. (2) Test the database upsert function to verify it correctly updates existing records. (3) Test the tag_ad function to verify all four tag types are correctly identified. (4) Create a test for the full pipeline using mock API responses (use unittest.mock to patch the actual API calls). Run the tests with 'pytest tests/ -v' and ensure all pass before any deployment."
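The mocking pattern that prompt describes looks like this in miniature. `count_active_ads` is a hypothetical stand-in for a real pipeline step, not a function from this project; the point is that `unittest.mock.patch` replaces the network call so the test runs entirely offline.

```python
import json
import urllib.request
from unittest.mock import MagicMock, patch

def count_active_ads(url):
    """Hypothetical pipeline step: fetch a collector endpoint and count records."""
    with urllib.request.urlopen(url) as resp:
        return len(json.load(resp)["data"])

def test_count_active_ads_with_mocked_api():
    # Build a fake HTTP response object so no real request is made
    fake_resp = MagicMock()
    fake_resp.read.return_value = json.dumps(
        {"data": [{"id": "1"}, {"id": "2"}]}).encode()
    fake_resp.__enter__.return_value = fake_resp
    with patch("urllib.request.urlopen", return_value=fake_resp):
        assert count_active_ads("https://example.test/ads") == 2
```

Under pytest, any function named `test_*` containing bare asserts like this is collected and run automatically.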
| Failure Point | Symptom | Fix Prompt for Claude Code |
|---|---|---|
| Meta API rate limit | Script stops after 10–15 competitors | "Add exponential backoff with jitter to meta_collector.py for 429 responses" |
| Playwright selector change | Google collector returns empty results | "Update google_collector.py selectors and add a fallback selector array" |
| SQLite lock error | Database write fails when dashboard is open | "Add WAL mode to SQLite connection in db_manager.py" |
| Streamlit memory growth | Dashboard slows after hours of use | "Reduce st.cache_data TTL and add pagination to data table query" |
| Anthropic API timeout | AI analysis fails for large batches | "Reduce batch size to 10 ads per API call and add retry logic" |
The key debugging workflow with Claude Code is simple: when something breaks, paste the full error traceback directly into your Claude Code session without any editing. Claude Code reads the context of your full project alongside the error and typically identifies the root cause immediately. This workflow is dramatically faster than searching Stack Overflow or reading documentation.
For teams running this on a server, a health check confirms the system is collecting data as expected:
Your prompt: "Create a health_check.py script that queries the database and returns a status report: last collection time, total ads stored, ads collected in the last 24 hours, and whether any collectors returned errors in the last run. Format the output as JSON and also print a human-readable summary."
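A sketch of that health check against the ads table. Column names follow the db_manager schema from Step 4; reading DB_PATH from .env and printing the human-readable summary are left to the real script.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def health_report(conn):
    """Summarize collection health from the ads table."""
    total = conn.execute("SELECT COUNT(*) FROM ads").fetchone()[0]
    last = conn.execute("SELECT MAX(last_seen) FROM ads").fetchone()[0]
    cutoff = (datetime.now(timezone.utc) - timedelta(hours=24)).isoformat()
    recent = conn.execute(
        "SELECT COUNT(*) FROM ads WHERE last_seen >= ?", (cutoff,)
    ).fetchone()[0]
    return {
        "last_collection": last,
        "total_ads": total,
        "ads_last_24h": recent,
        "status": "ok" if recent > 0 else "stale",  # stale = collectors stopped
    }
```

Because `last_seen` is stored as an ISO-8601 UTC string, the 24-hour window can be computed with a plain string comparison in SQL.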
BUILD THIS LIVE WITH EXPERT GUIDANCE
Following written tutorials gets you 70% of the way there. The remaining 30% (the debugging, the configuration decisions, the production-hardening choices) is where most people get stuck. The Master Claude Code in One Day workshop by Adventure Media covers every step of this build in a live, hands-on session with direct Q&A access to practitioners who've built these systems for real clients.
Adventure Media pioneered ChatGPT Ads management and now teaches businesses how to build AI automation systems that create lasting competitive advantage. Previous sessions sold out. Seats are limited.
Reserve Your Seat Now →

The Competitive Intelligence Framework This Dashboard Unlocks

The dashboard is operational. Data is flowing. Now the question shifts from "how do I build this?" to "how do I use it to win?" The system's value depends entirely on having a structured process for turning observations into decisions. Here is the framework that practitioners use when working with live competitor intelligence data.
Schedule 30 minutes every Monday morning to review the dashboard with your marketing team. Structure the review around four questions: Which ads launched in the last week? Which ads have been running longest (and are therefore likely profitable)? Which tags and messaging patterns are trending up? And which ads were quietly pulled?
The most effective use of competitor intelligence is not copying ads. It's identifying the angles competitors are not covering and building creative that fills those gaps. When the AI analysis shows three competitors all emphasizing "fast results", that's a signal to test messaging around "lasting results" or "sustainable transformation". The gap in the market is often the most profitable positioning.
Export the data table filtered by competitor and the last 30 days, then paste it into a Claude.ai chat session (not Claude Code) with the prompt: "Analyze these competitor ads and identify the messaging angles that are NOT being covered. Suggest three positioning angles that would differentiate a new entrant in this space." This meta-use of Claude closes the loop between data collection and creative strategy.
Scaling the System: What to Build Next

The foundation built in this guide handles the core use case. As the system matures and the team builds confidence with it, several extensions add significant value.
| Extension | Complexity | Value | Claude Code Prompt Difficulty |
|---|---|---|---|
| Landing page change tracker | Medium | ✅ High | Intermediate |
| LinkedIn Ad Library collector | Medium | ✅ High for B2B | Intermediate |
| Sentiment trend analysis | Low | ⚠️ Medium | Beginner |
| TikTok Creative Center scraper | High | ✅ High for D2C | Advanced |
| Automated creative brief generator | Low | ✅ Very High | Beginner |
| ChatGPT Ads tracker (emerging) | High | ✅ Rapidly growing | Advanced |
The ChatGPT Ads tracker deserves special mention. With OpenAI now actively testing ads on its Free and Go tiers in the US, tracking which advertisers appear in ChatGPT conversations, what formats they're using, and what conversational contexts trigger ads is the next frontier of competitive intelligence. The architecture built in this guide can be extended to capture this data as the platform matures.
Frequently Asked Questions

Is it legal to collect competitor ad data this way?

Collecting data from publicly available sources like the Meta Ad Library and Google Ads Transparency Center is generally legal, as these platforms are explicitly designed for public transparency. Always review each platform's Terms of Service before building a collector. The Meta Ad Library API is officially documented and supported. The Google Transparency Center data is publicly accessible. Neither source requires authentication to view, and both exist specifically to enable public scrutiny of advertising practices. That said, this is not legal advice, and consulting with a lawyer familiar with data privacy law is recommended before commercial deployment.
How much does the system cost to run each month?

The primary costs are Anthropic API usage (for the AI analysis module) and hosting. The API calls for weekly batch analysis of 20 competitors with 20 ads each typically cost less than $5 per month using the Haiku model. Hosting on a basic cloud instance (AWS t3.micro or equivalent) runs approximately $8–15 per month. The Meta Ad Library API is free. Total monthly cost for a 10–20 competitor operation is typically under $25.
Do I need to know Python to build this?

Basic Python familiarity helps, but Claude Code dramatically reduces the barrier. The most important skill is writing clear, specific prompts. Claude Code handles syntax. What you need to provide is the "what" and "why" of each component. If a generated file confuses you, ask Claude Code to explain it line by line.
How many competitors can the system track?

The SQLite database and Python architecture scales comfortably to 50–100 competitors. Beyond that, consider migrating to PostgreSQL (ask Claude Code to update the db_manager.py to use psycopg2 instead of sqlite3 for the connection layer). The collectors themselves are limited by API rate limits, not the database.
What happens when the Meta access token expires?

Meta long-lived tokens expire after 60 days. Add a token expiry check to the health_check.py script that alerts you when the token has less than 7 days remaining. Refresh tokens via the Meta for Developers console before expiry to avoid gaps in data collection.
Can this track Google Search ads specifically?

The Google Ads Transparency Center shows ads across Search, Display, and YouTube. For Search-specific ad copy intelligence, tools like SpyFu and iSpionage offer API access that can be integrated as additional collectors using the same architecture. Prompt Claude Code to add a new collector module following the same interface as meta_collector.py.
What if the Google collector starts hitting CAPTCHAs?

The Google Transparency Center occasionally presents CAPTCHAs to automated browsers. Adding realistic browser headers, randomizing request timing (between 3–8 seconds per interaction), and using Playwright's stealth mode reduces CAPTCHA frequency significantly. If CAPTCHAs persist, rotating through residential proxy services is the production-grade solution. Ask Claude Code: "Update google_collector.py to use playwright-stealth and add randomized delays between 3 and 8 seconds."
Can the system monitor competitor email campaigns too?

Not directly, but it can be extended. Sign up for competitor email lists manually and forward them to a dedicated inbox. Ask Claude Code to add an email collector module using Python's imaplib to read that inbox, extract marketing emails, and store them using the same normalization pipeline as the ad collectors. This creates a multi-channel intelligence system covering paid ads and email simultaneously.
How do I share the dashboard with my team?

Deploy to Streamlit Community Cloud by pushing your project to a public or private GitHub repository and connecting it via the Streamlit Cloud interface. This generates a shareable URL that works in any browser with no local installation required. For sensitive competitive data, use Streamlit's authentication features or deploy behind a VPN.
Which Claude model should power the AI analysis?

For batch processing of ad copy, claude-3-5-haiku-20241022 offers the best cost-to-quality ratio. For deeper strategic analysis where you're processing a full month of competitor data and want nuanced insights, claude-3-5-sonnet-20241022 produces significantly richer output. Consider a tiered approach: run Haiku daily for tagging and basic classification, and Sonnet weekly for strategic synthesis.
How long does the full build take?

Most marketers with basic terminal familiarity complete the full build in 6–10 hours over 2–3 sessions. The Meta API approval process (24–48 hours) is typically the longest single delay. The actual coding, following this guide with Claude Code handling generation, usually takes 3–4 focused hours.
Can the system track YouTube ads?

Yes. The Google Ads Transparency Center includes YouTube ads. For richer YouTube intelligence including video thumbnails and view counts, the YouTube Data API v3 offers additional data points. Ask Claude Code to create a youtube_collector.py that queries the YouTube Data API for competitor channel uploads and cross-references them with the Transparency Center data to identify which videos are being promoted as ads.