Click fraud, in the context of cost-per-click (CPC) advertising, is the generation of clicks on a paid advertisement that do not represent genuine interest from a human user within the advertiser's target audience. In general digital advertising, click fraud typically involves automated bots, click farms, or competitor sabotage. In recruitment advertising specifically, the taxonomy of invalid traffic is more nuanced and encompasses several distinct categories that carry different implications for employer budget protection.
The term invalid click is more technically precise than fraudulent click — not all invalid clicks are deliberately fraudulent. A duplicate click from a genuine job seeker who bookmarks a tracking URL and returns to it the following day is not fraudulent in intent, but it would represent a double-charge if counted. A click from a web crawler indexing job pages is not malicious, but it consumes click budget without any possibility of resulting in an application. The fraud protection challenge is therefore better framed as invalid traffic detection than fraud detection alone — the goal is to ensure that only clicks representing a genuine, unique, human job seeker within the target geography and within an active campaign are charged to the employer's account.
This reframing matters because it changes the design requirements for a validation system. A pure fraud-detection system optimises for catching deliberate bad actors. A comprehensive invalid traffic detection system must also catch accidental duplicates, geographic mismatches, crawler traffic, and expired campaign clicks — categories that may individually represent small percentages but collectively account for a substantial fraction of raw click volume in multi-network programmatic campaigns.
"The distinction between fraudulent intent and invalid traffic is not merely semantic. It determines the scope of the detection architecture required — and the honest acknowledgment of what any detection system can and cannot catch."
The Association of National Advertisers (ANA) estimated in its 2023 programmatic media supply chain transparency study that between 14% and 22% of programmatic ad impressions and clicks globally are invalid — representing traffic that serves no legitimate advertising purpose. Juniper Research's annual ad fraud forecasts have consistently estimated total global losses from click fraud at $80–100 billion annually across all digital advertising categories.
In the recruitment-specific context, the Recruitment and Employment Confederation (REC) and various academic studies have identified click fraud as a growing concern as the industry has shifted toward programmatic CPC models. The financial impact is asymmetric: for large employers with substantial programmatic budgets, even a 10% invalid traffic rate on a $50,000 annual recruitment advertising spend represents $5,000 in wasted expenditure. For smaller employers with tighter budgets, a 20% invalid traffic rate on a $500 campaign represents a meaningful erosion of their hiring investment.
What makes the recruitment sector somewhat distinctive from general digital advertising is the source diversity of invalid traffic in multi-network campaigns. Because programmatic job advertising distributes across multiple partner networks simultaneously — each with their own publisher ecosystems and traffic quality standards — the aggregate invalid traffic rate for a multi-network campaign can exceed the rate on any individual platform. A partner network with a 5% invalid traffic rate and a second partner with an 8% rate do not simply combine into a blended rate somewhere between the two — the geographic mismatch and deduplication challenges introduced by serving across multiple international networks can push the aggregate rate meaningfully above either individual network's figure.
Understanding the specific sources of invalid traffic in recruitment advertising is prerequisite to evaluating any fraud protection system. The principal categories are:
Automated crawlers and scrapers: Web crawlers — including search engine bots, data aggregation services, and competitive intelligence tools — routinely follow links across job board pages. These clicks appear in server logs, may trigger tracking pixels, and in unprotected systems would be counted as valid clicks. Crawlers typically announce themselves through their User-Agent string, though some commercial scrapers use browser-emulating user agents to evade detection.
Publisher-side traffic inflation: In multi-network programmatic campaigns, publishers (job boards and aggregators within the partner network) are compensated based on click volume. This creates a financial incentive for some publishers to generate artificial traffic — either through automated systems or through incentivised "click mills." This type of fraud is particularly difficult to detect because it may originate from real IP addresses and use legitimate browser user agents.
Geographic mismatch: A campaign targeting United Kingdom employers may receive clicks from job seekers in countries with no realistic prospect of applying — due to either publisher network routing that does not enforce geographic boundaries, or deliberate gaming of geographic targeting systems. These clicks consume the employer's budget without any possibility of producing an application from the target candidate pool.
Duplicate clicks: A genuine job seeker may click the same tracking URL multiple times — by returning to a bookmarked page, by clicking from multiple devices, or by navigating back from the job page and clicking again. These are not fraudulent in intent but represent double-billing if counted.
Expired campaign clicks: Partner networks ingesting programmatic job feeds do not always update their listings in real time when a campaign expires or is cancelled. A job seeker may click a sponsored listing that continues to appear in a partner network's results after the campaign has ended, resulting in a click on an expired order.
Internal and infrastructure traffic: Clicks originating from private IP ranges (including the employer's own corporate network), monitoring systems, load testers, and infrastructure services may be logged as clicks if the tracking system does not filter private address ranges.
Tor and VPN anonymisation: Some users deliberately obscure their geographic origin using Tor exit nodes or commercial VPN services. While not inherently fraudulent, anonymised traffic cannot be reliably attributed to the campaign's target geography and may indicate attempts to evade deduplication measures.
Expertini's click validation system is built around a fundamental design principle: every click follows a single, unbypassable path through the validator before it can be counted. This is achieved by embedding a platform-controlled tracking URL in every job listing across every partner network's feed. Rather than trusting partner networks to report click counts — which creates a principal-agent problem where the reporting entity has a financial interest in the count — Expertini's system is the sole counter of record. All traffic, regardless of source network, is routed through:
https://[country_code].expertini.com/track/click/[utm_key]/[job_slug]/

The second design principle is defence in depth: multiple independent validation layers, each catching a distinct category of invalid traffic, applied sequentially. A click that evades one layer may be caught by a subsequent one. The layers are ordered so that the cheapest checks (those requiring no database lookups) run first, and the most expensive checks (Elasticsearch queries) run only after the cheaper checks have passed. This ordering optimises for server performance while maintaining comprehensive validation.
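The cheap-checks-first ordering can be sketched as a short-circuiting pipeline. This is an illustrative sketch, not the production implementation — the `Click` fields and `run_pipeline` name are assumptions; the point it demonstrates is that a failure at any layer ends evaluation before more expensive layers run.

```python
from typing import Callable, NamedTuple

class Click(NamedTuple):
    """Per-request facts each validation layer inspects (illustrative fields)."""
    utm_key: str
    job_slug: str
    user_agent: str
    ip: str

# A layer returns True if the click passes it; Layer is an assumed shape.
Layer = Callable[[Click], bool]

def run_pipeline(click: Click, layers) -> bool:
    """Run layers in increasing cost order; the first failure short-circuits,
    so Elasticsearch-backed checks never run for traffic a string check rejects."""
    for layer in layers:
        if not layer(click):
            return False  # caller silently redirects to the job page, no charge
    return True           # every layer passed: the click is counted and charged
```

Because evaluation stops at the first failing layer, malformed requests never consume database resources — which is precisely the rationale the text gives for running string-only checks first.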
The third principle is redirect-not-reject: invalid clicks are not served an error page or blocked at the browser level. Instead, they are silently redirected to the job listing page — the same destination a valid click would reach. This design choice has two benefits: it prevents fraud detection from degrading the candidate experience (a genuine job seeker who is temporarily misclassified as invalid still reaches the job page), and it prevents bad actors from using error responses to probe the validation system's boundaries.
The fourth principle is audit-without-charge: certain categories of invalid traffic — particularly foreign-country clicks and inactive order clicks — are logged with full attribution detail (IP address, originating country, referrer, timestamp) but never charged. This audit trail creates accountability with partner networks and enables post-campaign analysis of traffic quality by source.
The first check examines the HTTP Referer header to identify which domain originated the click. A defined allowlist of legitimate referrer domains — including major partner networks and Expertini's own domain — is evaluated against the incoming referrer. Clicks from domains outside this allowlist generate a warning log entry containing the referrer URL, truncated for storage efficiency. Importantly, this layer is advisory only — it never blocks a click. The architectural reasoning is sound: blocking on referrer alone would create false positives, because HTTP referrers are frequently absent due to referrer-stripping policies, ad blockers, and privacy browser settings. A genuine job seeker clicking from a legitimate network may arrive with an empty or unexpected referrer. Logging rather than blocking at this layer preserves the audit trail without introducing false rejection risk.
The UTM key is a 12-character lowercase hexadecimal string generated at order creation time using a cryptographically random process. Layer 1 validates that the utm_key extracted from the URL path conforms exactly to this specification: it must be present, exactly 12 characters in length, and consist exclusively of characters in the hexadecimal alphabet (0–9, a–f). Any deviation — a truncated key, a key containing non-hex characters, or an absent key — causes an immediate redirect without any database query being executed. This layer is computationally negligible (a simple string operation) and acts as a first-line filter against malformed requests: URL scanners, security research tools, and casual probers that construct arbitrary URLs matching the route pattern but not the key format are terminated here before consuming any database resources. The 12-character hex format provides a keyspace of approximately 281 trillion possible values, making brute-force guessing of valid keys computationally impractical.
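The format gate described above amounts to a single anchored pattern match. A minimal sketch, assuming the function and pattern names (the specification itself — exactly 12 lowercase hex characters — is from the text):

```python
import re
from typing import Optional

# Layer 1: exactly 12 lowercase hexadecimal characters, nothing else.
UTM_KEY_RE = re.compile(r"\A[0-9a-f]{12}\Z")

def utm_key_is_well_formed(utm_key: Optional[str]) -> bool:
    """Pure string check - no database access; any deviation means redirect."""
    return bool(utm_key) and UTM_KEY_RE.match(utm_key) is not None

# Keyspace: 16 ** 12 = 281,474,976,710,656 (~281 trillion) possible keys.
```

Note that uppercase hex is rejected by this pattern, matching the text's "exclusively ... (0–9, a–f)" specification.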
With a syntactically valid UTM key confirmed, the system queries the Elasticsearch job distribution orders index for an order matching that key. The query uses a match_phrase filter on the utm_key field — an exact-match search that returns only orders whose UTM key is an exact string match. If no matching order is found, the click is redirected immediately: the UTM key does not correspond to any known campaign in this country's index. If an order is found, its status is retrieved and checked case-insensitively in application code (handling both active and Active storage conventions). An order with any status other than active — including completed, expired, cancelled, paused, and pending — triggers a redirect without any write operation to the database. The design decision to perform no write on inactive order clicks is deliberate: it prevents spurious data modification on orders whose lifecycle has concluded, and avoids race conditions that could corrupt click counters on orders in transition states. The job slug is also validated at this layer — the slug must appear in the order's job list, confirming that the click corresponds to a job that is actually part of the campaign.
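The lookup and status logic can be sketched as follows. The query shape follows the `match_phrase` description above; the index field names (`status`, `jobs`) and function names are assumptions, and the Elasticsearch call itself is elided:

```python
from typing import Optional

def build_order_query(utm_key: str) -> dict:
    """Exact-phrase match on the utm_key field, per the Layer 2 description."""
    return {"query": {"match_phrase": {"utm_key": utm_key}}}

def order_is_chargeable(order: Optional[dict], job_slug: str) -> bool:
    if order is None:
        return False  # no matching campaign in this country's index: redirect
    if order.get("status", "").lower() != "active":
        return False  # inactive order: redirect with no write operation
    # The clicked slug must belong to the campaign's job list.
    return job_slug in order.get("jobs", [])
```

The case-insensitive comparison handles both the `active` and `Active` storage conventions mentioned in the text.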
Layer 3 contains three sequential sub-checks that enforce the campaign's temporal and financial boundaries. The first sub-check compares the current UTC timestamp against the campaign's expiry_date: if the campaign has expired, the order status is updated to expired in Elasticsearch and the job's feed entry is deactivated, before redirecting the click without charge. The second sub-check verifies that clicks_remaining is greater than zero: if the click budget is exhausted, the order is marked completed and deactivated. The third sub-check computes current budget consumption (clicks delivered × rate per click) and compares it against the employer's total budget cap: if the cap has been reached, the campaign is also marked completed. Critically, all three conditions result in the employer being charged nothing — clicks that arrive after expiry or budget exhaustion are redirected without decrementing any counter. This is not merely a consumer protection feature but a technical necessity: without this layer, a delay in feed propagation to partner networks (which can lag up to 15 minutes after a campaign expires) would result in clicks arriving after the campaign closes. Layer 3 ensures that this propagation window never costs the employer anything.
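The three sub-checks reduce to a short guard sequence. In this sketch the field names (`expiry_date`, `clicks_remaining`, `clicks_delivered`, `rate_per_click`, `budget_cap`) are assumptions inferred from the description; the status updates and feed deactivation are noted in comments rather than implemented:

```python
from datetime import datetime, timezone

def campaign_accepts_click(order: dict, now: datetime) -> bool:
    """Layer 3: temporal and financial boundary checks, in order."""
    if now >= order["expiry_date"]:
        # Sub-check 1: mark order 'expired', deactivate feed entry, no charge.
        return False
    if order["clicks_remaining"] <= 0:
        # Sub-check 2: click budget exhausted -> mark 'completed', no charge.
        return False
    spent = order["clicks_delivered"] * order["rate_per_click"]
    if spent >= order["budget_cap"]:
        # Sub-check 3: monetary cap reached -> mark 'completed', no charge.
        return False
    return True
```

All three failure paths redirect without decrementing any counter, which is what closes the feed-propagation window described above.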
Layer 4 applies two complementary User-Agent checks. The first is a blocklist check: the User-Agent string is matched against a maintained list of known bot, crawler, and automated tool signatures covering web crawlers, headless browsers, testing frameworks, programmatic HTTP clients, and social media link preview generators. The second is a positive allowlist check: after passing the blocklist, the User-Agent must contain at least one string indicating a real browser — specifically one of the major browser families. Clicks with no User-Agent string, or with a User-Agent that passes the blocklist but fails the browser allowlist, are redirected without charge. The two-stage approach addresses a specific evasion vector: sophisticated scrapers often avoid known bot signatures but may still fail to include a complete browser User-Agent. The blocklist and allowlist together close this gap. It is worth noting that this layer's effectiveness depends on the accuracy of User-Agent strings, which can be spoofed. This limitation is explicitly acknowledged in Section 10.
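The two-stage User-Agent check can be sketched as a blocklist pass followed by a positive browser-family match. The signature lists below are short illustrative samples, not the maintained production lists:

```python
from typing import Optional

# Stage 1 blocklist: known automation signatures (illustrative subset).
BOT_SIGNATURES = ("bot", "crawler", "spider", "headless", "python-requests", "curl")
# Stage 2 allowlist: the UA must name a major browser family (illustrative subset).
BROWSER_FAMILIES = ("chrome", "firefox", "safari", "edg", "opera")

def user_agent_passes(ua: Optional[str]) -> bool:
    if not ua:
        return False  # absent User-Agent: redirect, no charge
    ua_lower = ua.lower()
    if any(sig in ua_lower for sig in BOT_SIGNATURES):
        return False  # stage 1: known automation signature
    # Stage 2: passing the blocklist is not enough - it must look like a browser.
    return any(family in ua_lower for family in BROWSER_FAMILIES)
```

The second stage is what catches the evasion vector described above: a custom scraper UA such as `MyScraper/1.0` matches no bot signature but also names no browser, so it fails the allowlist.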
Layer 5 extracts the visitor's IP address from the request, using a priority hierarchy that accounts for Cloudflare's proxy infrastructure: the CF-Connecting-IP header (which Cloudflare populates with the true client IP) is checked first, followed by the leftmost value of X-Forwarded-For (which may carry the original client IP in non-Cloudflare-proxied requests), and finally the REMOTE_ADDR from the Flask request context as a fallback. The extracted IP is then validated against a list of private and reserved IPv4 and IPv6 address ranges: loopback (127.x.x.x), unspecified (0.x.x.x), RFC 1918 private networks (10.x.x.x, 172.16–31.x.x, 192.168.x.x), IPv6 loopback (::1), and the literal string localhost. Clicks arriving from private address ranges — which would indicate traffic originating from within Expertini's own infrastructure, from the employer's corporate network, or from a localhost test environment — are redirected without charge. An absent or unextractable IP produces the same result.
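The extraction hierarchy and private-range filter can be sketched with the standard-library `ipaddress` module, which covers the RFC 1918, loopback, and unspecified ranges listed above. The function names and the plain-dict headers argument are assumptions (the production code reads a Flask request):

```python
import ipaddress
from typing import Optional

def extract_client_ip(headers: dict, remote_addr: Optional[str]) -> Optional[str]:
    """Priority: CF-Connecting-IP, then leftmost X-Forwarded-For, then REMOTE_ADDR."""
    if headers.get("CF-Connecting-IP"):
        return headers["CF-Connecting-IP"]       # Cloudflare's true client IP
    forwarded = headers.get("X-Forwarded-For", "")
    if forwarded:
        return forwarded.split(",")[0].strip()   # leftmost = original client
    return remote_addr

def ip_is_chargeable(ip: Optional[str]) -> bool:
    if not ip or ip == "localhost":
        return False  # absent IP or the literal string 'localhost'
    try:
        addr = ipaddress.ip_address(ip)
    except ValueError:
        return False  # unparseable address: treat as invalid
    # is_private covers the RFC 1918 ranges; loopback and unspecified
    # (127.x.x.x, ::1, 0.0.0.0) are checked explicitly as well.
    return not (addr.is_private or addr.is_loopback or addr.is_unspecified)
```

Checking `CF-Connecting-IP` first matters because `X-Forwarded-For` is client-settable on requests that do not transit Cloudflare's proxy.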
Layer 6 is the deduplication mechanism that prevents the same visitor from being charged more than once per job per 24-hour period. A deduplication key is constructed by concatenating three identifiers — the visitor's IP address, the campaign's UTM key, and the job slug — and computing the SHA-256 hash of this concatenated string, retaining the first 32 hexadecimal characters. This hash is used as the document ID in a dedicated deduplication Elasticsearch index. The system attempts to retrieve an existing document with this ID: if found, the created_date of the existing record is compared against a rolling 24-hour cutoff. If the existing record falls within the 24-hour window, the current click is treated as a duplicate and redirected without charge — the employer is not billed, but the candidate still reaches the job page seamlessly. If the existing record has aged beyond 24 hours, it is overwritten with a fresh timestamp and the click proceeds. If no existing record is found, a new deduplication record is created. The SHA-256 construction means that the deduplication system never stores the raw IP address as an index key — the hash is a one-way function, providing a degree of privacy protection for the visitor's IP while maintaining deduplication efficacy. Raw IP addresses are, however, stored within the deduplication document body for audit purposes.
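The key construction and 24-hour window check are straightforward to sketch. The hash recipe (SHA-256 of IP + UTM key + slug, first 32 hex characters) is from the description above; the concatenation order and function names are assumptions:

```python
import hashlib
from datetime import datetime, timedelta, timezone
from typing import Optional

def dedup_key(ip: str, utm_key: str, job_slug: str) -> str:
    """SHA-256 of the concatenated identifiers, truncated to 32 hex chars.
    The raw IP never appears in the index key - only its one-way hash."""
    raw = f"{ip}{utm_key}{job_slug}".encode()
    return hashlib.sha256(raw).hexdigest()[:32]

def is_duplicate(existing_created: Optional[datetime], now: datetime) -> bool:
    """A prior record inside the rolling 24h window marks this click a duplicate."""
    if existing_created is None:
        return False  # no prior record: create one and let the click proceed
    return now - existing_created < timedelta(hours=24)
```

Because the same three inputs always yield the same document ID, a repeat click from the same IP on the same job in the same campaign collides with the existing record, which is exactly what triggers the duplicate path.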
Cloudflare's infrastructure assigns a two-letter country code to every request via the CF-IPCountry header. For traffic originating from Tor exit nodes, Cloudflare uses the special code T1. For traffic where the country cannot be determined — which may indicate sophisticated VPN usage, newly allocated IP ranges, or infrastructure IP addresses — Cloudflare uses XX. Layer 6b checks for these two values and generates a warning log entry including the anomalous country code, the visitor IP, and the referrer. Unlike most other layers, Layer 6b does not currently block clicks on this basis — it logs and warns. The reasoning is that Tor and VPN usage, while potentially indicative of anonymisation intent, may also represent legitimate job seekers in countries with internet censorship or corporate network policies that route all traffic through VPN. Blocking on Tor/VPN alone could deny the platform to genuine candidates who happen to use privacy tools. The warning log, however, serves as an input to partner network accountability discussions: unusually high rates of T1/XX traffic from a specific partner source may indicate traffic quality issues with that publisher.
The final validation layer compares the visitor's country of origin — determined via Cloudflare's CF-IPCountry header — against the campaign's target country_code. A click is classified as foreign if three conditions are simultaneously true: the order's country code is not empty, not the test/global value AC (Ascension Island, used for testing), and not the universal wildcard ALL; the CF country code is not T1 or XX (which are handled separately by L6b); and the CF country code does not match the order's target country code. When all three conditions hold, the click is classified as a foreign-country invalid click. The system writes a detailed audit record to the deduplication index — including the visitor's IP, originating country, target country, referrer URL, UTM key, and timestamp — tagged with type: "foreign" to distinguish it from standard deduplication records. Additionally, a counter field foreign_clicks is incremented on the order document in Elasticsearch, enabling employers to see the total foreign click count in their campaign dashboard and PDF report. The click is then redirected to the job page without decrementing clicks_remaining or incrementing clicks_delivered — the employer is never charged. This layer is bypassed for campaigns where the country code is AC or ALL, reflecting that those campaign types either are global in scope or are being used for integration testing.
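The three-condition classification reduces to a small predicate. The country codes (`AC`, `ALL`, `T1`, `XX`) are from the description above; the set and function names are assumptions:

```python
ANON_CODES = {"T1", "XX"}         # Tor exit / undetermined: Layer 6b's concern
BYPASS_CODES = {"", "AC", "ALL"}  # empty, test (Ascension Island), or global

def is_foreign_click(order_country: str, cf_country: str) -> bool:
    """True only when all three conditions from the Layer 7a description hold."""
    if order_country in BYPASS_CODES:
        return False  # global or test campaign: geographic check bypassed
    if cf_country in ANON_CODES:
        return False  # anonymised traffic is logged by Layer 6b, not charged here
    return cf_country != order_country
```

A `True` result triggers the audit write, the `foreign_clicks` increment, and the uncharged redirect described above.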
Layer 7b is architecturally positioned between L2 (order lookup) and L3 (expiry/budget), which is deliberate. Once an order document has been retrieved from Elasticsearch and its status confirmed as non-active — covering completed, expired, cancelled, paused, and pending — the click is immediately redirected to the job page without performing any further database operations. Specifically: no expiry date comparison, no budget cap calculation, no deduplication write, no counter update. This design eliminates unnecessary database writes on orders that have concluded their lifecycle. It is particularly important for protecting against a timing vulnerability: in the interval between a campaign expiring or being cancelled and the partner network ingesting the updated job feed (up to 15 minutes), partner networks may continue to serve the sponsored listing. Clicks arriving during this window hit L7b and are silently redirected without charge. The candidate reaches the job page; the employer is billed nothing; and no data is modified on a concluded order document.
The following diagram illustrates the complete decision path from an incoming click to its final outcome. Every click enters at L0 and exits at one of two outcomes: a clean redirect (no charge, candidate reaches job page) or a valid counted click (charge applied, counter decremented, candidate reaches job page).
The foreign click problem deserves particular attention because it is both common in multi-network programmatic campaigns and commercially significant. When an employer in Spain purchases a programmatic campaign targeting Spain-based job seekers, they are paying for candidates who are realistically able to apply and take up a role in their target location. A click from a job seeker in a country with entirely different labour market conditions, salary expectations, and visa eligibility represents not just wasted spend — it potentially distorts the employer's understanding of campaign performance if it appears in click metrics as a valid delivery.
The foreign click problem arises in multi-network campaigns because partner networks distribute across many geographies simultaneously. A network serving primarily European job boards may also serve some traffic from North African or Middle Eastern audiences. The network's own geographic filtering may not be fine-grained enough to precisely honour a campaign's country-level targeting — particularly for niche or emerging markets where publisher geographic tagging is imprecise.
Expertini's Layer 7a addresses this through Cloudflare's IP geolocation infrastructure. Cloudflare operates one of the world's largest IP geolocation databases, updated continuously from routing and registration data. The CF-IPCountry header is populated by Cloudflare's edge network before the request reaches Expertini's application servers — meaning the geolocation lookup adds no latency to the click handling path.
When a foreign click is detected, three things happen simultaneously. First, the click is redirected to the job page — the candidate's experience is unaffected. Second, a detailed audit document is written to the deduplication index with type: "foreign", capturing all attribution data needed to raise a quality issue with the delivering partner network. Third, the foreign_clicks counter on the order document is incremented, so the employer can see — in their campaign dashboard and downloadable PDF performance report — exactly how many foreign-country clicks were intercepted across their campaign lifetime.
The dashboard reports foreign_clicks as a separate metric from clicks_delivered. Foreign clicks do not appear in the delivered count, do not decrement the remaining click budget, and are not charged. They are presented as a transparency metric — evidence that the fraud protection system actively intercepted out-of-geography traffic.
A clear taxonomy of what Expertini's system guarantees will never be charged to an employer's account is useful both as a consumer protection reference and as a framework for comparing platforms. The following categories of clicks are definitively excluded from billing:
Syntactically invalid clicks (L1): Requests to the tracking endpoint with malformed UTM keys — wrong length, invalid characters, or absent key — are never recorded as clicks.
Unmatched campaign clicks (L2): Clicks arriving with a UTM key that does not correspond to any order in the current country's index, or to an order that does not include the clicked job slug, are never charged.
Inactive order clicks (L7b): Clicks arriving on campaigns in any status other than active — including paused, completed, expired, cancelled, and pending — are never charged and produce no database writes.
Post-expiry clicks (L3 sub-check 1): Clicks arriving after the campaign's 90-day expiry date are never charged, regardless of whether the partner network's feed has yet updated to remove the listing.
Over-budget clicks (L3 sub-checks 2 and 3): Clicks arriving after the campaign's click count has been exhausted, or after the total budget cap has been reached, are never charged.
Bot and crawler clicks (L4): Clicks from automated systems identifiable through User-Agent analysis are never charged.
Private-network clicks (L5): Clicks from internal infrastructure, corporate networks, and localhost environments are never charged.
Duplicate clicks (L6): A second click from the same IP address on the same job within any 24-hour period is never charged.
Foreign-country clicks (L7a): Clicks from IP addresses geolocated by Cloudflare to countries other than the campaign's target country are never charged and are logged as foreign traffic with full attribution data.
Fraud protection approaches vary significantly across the programmatic recruitment advertising industry. The comparison below is based on publicly documented platform capabilities and should be read as an approximate reference rather than a definitive audit — the internal implementation details of competitor systems are not publicly verifiable.
| Platform | Fraud Detection Model | Geographic Fraud Protection | 24h Deduplication | Expired Click Protection | Transparency to Employer | Refund for Invalid Clicks |
|---|---|---|---|---|---|---|
| Expertini | 7 independent layers, own click validator, all traffic routed through single endpoint | ✓ L7a country-level via Cloudflare CF-IPCountry | ✓ SHA-256 hash, 24h TTL | ✓ L3 + L7b — never charged, no DB write | ✓ Dashboard + PDF report showing foreign_clicks separately | ✓ Auto Stripe refund for undelivered clicks |
| Appcast | IVT (Invalid Traffic) filtering via internal and third-party signals; GIVT + SIVT standards | ⚬ Publisher-level geographic filtering (not click-level geolocation) | ⚬ Not publicly documented at session level | ⚬ Feed-propagation dependent; credits issued | ⚬ Aggregate IVT reporting; not per-click foreign attribution | ⚬ Credit-based; not automatic Stripe refund |
| Joveo | Real-time IVT filtering; partnerships with DoubleVerify and similar | ⚬ Geographic targeting controls; click-level geo validation not publicly documented | ⚬ Session-level dedup referenced but not technically specified publicly | ✓ Near-real-time API updates reduce expired click window | ⚬ Campaign analytics; specific IVT breakdown not employer-visible | ⚬ Account credit; not automatic per-campaign refund |
| Indeed Sponsored | Proprietary click quality systems; budget cap enforcement; CPA option eliminates click fraud risk | ✓ Strong geographic targeting enforcement on own network | ✓ Documented; same user / same job filtering | ✓ Real-time budget management; near-instant pause on exhaustion | ⚬ Aggregate reporting; invalid click breakdown not per-click visible | ✓ Budget cap auto-pause; refunds for clearly invalid charges |
| LinkedIn Job Ads | Internal click quality systems; LinkedIn's professional verification reduces bot risk | ✓ Strong: LinkedIn profiles carry verified location data | ✓ Frequency capping per member per campaign | ✓ Daily budget cap prevents expired-order overspend | ⚬ Campaign manager analytics; no foreign click breakdown | ⚬ Daily budget cap; click-level refund policy not published |
| JobAdX | Basic IVT filtering; real-time exchange model | ✗ Limited geographic validation documented | ⚬ Not publicly documented | ⚬ Feed-based; propagation delay dependent | ✗ Limited employer-facing fraud transparency | ⚬ Account credit; case-by-case |
⚬ = Partially documented or not publicly verifiable. ✗ = Not documented. ✓ = Documented capability. This table reflects publicly available information as of 2025–2026 and should not be taken as a definitive technical audit of competitor systems.
Intellectual honesty requires acknowledging what the validation architecture cannot reliably prevent. The following limitations are documented not to undermine confidence in the system, but because employers making investment decisions deserve an accurate picture of what multi-layer validation achieves and where residual risk remains.
The most sophisticated form of click fraud uses residential proxy networks — large pools of real residential IP addresses, typically recruited through compromised IoT devices or incentivised VPN apps — to generate clicks that appear to originate from genuine human users. These clicks arrive with real public IPs, legitimate browser User-Agents, and valid geographic attributions. Layers L4 and L5 cannot reliably detect this category of fraud. IP reputation scoring services (offered by DoubleVerify, PerformLine, and similar) can reduce exposure to known proxy networks, but no system achieves complete coverage given the scale and dynamism of residential proxy networks. Expertini does not currently integrate a third-party IP reputation scoring service, which represents a meaningful gap relative to larger enterprise programmatic platforms like Appcast and Joveo that incorporate such services.
Layer L4's bot detection relies on User-Agent strings, which can be set to any arbitrary value by any HTTP client. A scraper configured to present a Chrome browser User-Agent will pass L4's allowlist check despite being automated. The effectiveness of UA-based bot detection degrades as the sophistication of the bot increases. Headless browser frameworks that render JavaScript and present full browser User-Agent strings (Puppeteer, Playwright, Selenium in stealth mode) may evade both the blocklist and the allowlist checks. Behavioural signals — mouse movement, scroll patterns, time-on-page — would provide more reliable human verification but are architecturally infeasible at the redirect layer where the click is evaluated.
The 24-hour deduplication mechanism uses IP address as one component of the deduplication hash. Two edge cases reduce its effectiveness. First, IPv6 addresses can be rapidly rotated by ISPs and operating systems under privacy extensions (RFC 4941), meaning a single device may present a different IPv6 address on each connection — defeating IP-based deduplication across multiple clicks from the same physical user. Second, Carrier-Grade NAT (CGNAT) causes many users on mobile networks to share a single public IPv4 address. In CGNAT environments, the deduplication system may over-block — treating different genuine users sharing a NAT address as duplicates — or under-block when the NAT pool rotates addresses between clicks from the same device.
Cloudflare's IP geolocation is among the most accurate commercially available, but country-level accuracy is not 100%. IP address allocation does not always reflect where a user is physically located — corporate VPNs route traffic through data centres in countries other than the user's actual location; satellite internet services (notably Starlink) may geolocate to the satellite's ground station country rather than the user's physical location; and IP address blocks allocated to multinational companies may be registered in one country while serving users in another. The Layer 7a foreign click check is therefore better characterised as a significant risk reduction measure than a guarantee of perfect geographic precision.
If a partner network deliberately routes low-volume fraudulent traffic — below the threshold that would produce statistically anomalous patterns in the deduplication or geolocation logs — detection through click-level validation alone is challenging. Monitoring for anomalous click rates per partner network, elevated foreign click fractions from specific sources, and unusual time-of-day click distributions provides additional signals, but systematic analysis of partner traffic quality at the network level is not currently part of Expertini's automated fraud monitoring. Manual review of partner-level traffic statistics is conducted, but this is not as robust as the automated statistical anomaly detection offered by specialist fraud monitoring services.
Employers evaluating programmatic job advertising platforms — including Expertini's tool at /employer/programmatic-job-advertising-outreach/ — should apply the following framework when assessing fraud protection claims:
Ask who controls the click counter. If the platform relies on partner networks to self-report click counts, there is an inherent conflict of interest. Platforms that route all clicks through their own validated tracking endpoint — as Expertini does — provide a stronger protection structure because the counting entity has no financial interest in inflating the count.
Verify that foreign clicks are excluded from billing. This is not universal across platforms. Some CPC platforms charge all clicks regardless of geographic origin. Confirm in writing that clicks from outside the campaign's target country are neither counted nor charged.
Confirm the refund policy for undelivered clicks. A campaign that expires or is cancelled before its click budget is consumed should produce an automatic, prompt refund for the unused portion. Account credits that expire or require negotiation are meaningfully different from an automatic return to the original payment method.
Request transparency on invalid traffic metrics. A platform that can show you — per campaign, per partner network — how many clicks were intercepted at each validation layer is providing genuinely useful data. Aggregate IVT percentages without network-level or layer-level attribution have limited diagnostic value.
Evaluate click-to-application rate as a quality signal. After fraud filtering, the most meaningful measure of traffic quality is what fraction of paid clicks result in completed job applications. A platform achieving consistent 8–12% click-to-application rates is delivering meaningfully better traffic quality than one achieving 2–3%, regardless of how sophisticated its fraud detection claims to be.
For a broader examination of how programmatic advertising works within the recruitment technology landscape, refer to the companion research article: Programmatic Job & Recruitment Advertising — Research Guide.
Frequently Asked Questions — Click Fraud Protection in Programmatic Job Advertising
Does Expertini's fraud protection actually prevent me from being charged for bot clicks?
Yes — for bots that announce themselves through their User-Agent string, which covers the majority of web crawlers, scraping tools, and automated HTTP clients. Layer 4 maintains a blocklist of known bot signatures and requires that the User-Agent contain a recognised browser identifier. Bots that use standard or self-identifying User-Agents are intercepted and redirected without charge.
However, sophisticated bots that spoof a full browser User-Agent string — including some commercial scraping services and headless browser frameworks operating in stealth mode — may evade Layer 4 detection. This is an acknowledged limitation, not a hidden one. The deduplication layer (Layer 6) provides partial mitigation by catching repeated clicks from the same IP, but a bot network using distributed IPs would evade this too. No click fraud protection system in the industry provides complete protection against sophisticated distributed bot networks.
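A Layer-4-style User-Agent filter can be sketched as follows. The signature list and browser tokens here are illustrative assumptions, not Expertini's actual blocklist; the final example demonstrates the acknowledged evasion path, since a fully spoofed browser string passes the check.

```python
# Illustrative signatures only; a production blocklist would be far larger.
BOT_SIGNATURES = ("bot", "crawler", "spider", "curl", "wget", "python-requests")
BROWSER_TOKENS = ("chrome", "firefox", "safari", "edg", "opera")

def is_valid_browser_ua(user_agent: str) -> bool:
    ua = (user_agent or "").lower()
    if not ua:
        return False  # empty User-Agent: treat as automated
    if any(sig in ua for sig in BOT_SIGNATURES):
        return False  # self-identifying bot, scraper, or HTTP client
    # Require a recognised browser identifier somewhere in the string.
    return any(token in ua for token in BROWSER_TOKENS)

assert is_valid_browser_ua("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0 Safari/537.36")
assert not is_valid_browser_ua("Googlebot/2.1 (+http://www.google.com/bot.html)")
assert not is_valid_browser_ua("curl/8.4.0")
# The acknowledged gap: a stealth headless browser spoofing a full browser
# User-Agent passes this check and must be caught, if at all, by later layers.
assert is_valid_browser_ua("Mozilla/5.0 (X11; Linux x86_64) Chrome/119.0 Safari/537.36")
```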
What happens to my remaining budget when I cancel a campaign?
When you cancel an active or paused Expertini programmatic campaign, the system calculates clicks_remaining × rate_per_click and initiates an automatic Stripe refund for that exact amount. This is not a credit — it is a return to your original payment method. The refund is processed at the moment of cancellation and typically clears within 5–10 business days depending on your bank or card issuer. There is no cancellation fee and no minimum campaign duration. Separately, any foreign clicks and expired-period clicks that were intercepted during the campaign were never charged in the first place, so there is no reconciliation needed for those categories.
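The refund arithmetic above can be sketched in a few lines. The function and field names are illustrative assumptions; real billing code would typically work in integer minor units (cents) through the payment provider rather than in floats.

```python
def cancellation_refund(clicks_purchased: int, clicks_billed: int,
                        rate_per_click: float) -> float:
    # clicks_remaining x rate_per_click, as described above (names assumed).
    clicks_remaining = clicks_purchased - clicks_billed
    return round(clicks_remaining * rate_per_click, 2)

# e.g. a 500-click campaign at 0.40 per click, cancelled after 180 billed clicks:
assert cancellation_refund(500, 180, 0.40) == 128.00
```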
How does the 24-hour deduplication work and what counts as a duplicate?
A duplicate is defined as a second click on the same job from the same IP address within any rolling 24-hour window. The deduplication key is computed as the SHA-256 hash of the concatenation of three values: the visitor's IP address, the campaign's UTM key (unique per order), and the job slug (unique per job listing). If a deduplication record with this hash exists and was created within the last 24 hours, the incoming click is redirected without charge. The candidate still reaches the job page normally.
This definition means that a genuine job seeker who clicks a sponsored listing, considers the role, and then returns via the same tracking URL later the same day will not be charged twice. After 24 hours, the deduplication record expires and a new click from the same IP would be counted — this is intentional, as returning to a job the next day may indicate renewed interest. The SHA-256 hashing means the deduplication system does not expose raw IP addresses as index keys, though the IP is stored within the deduplication document body for audit purposes.
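Under those definitions, the check can be sketched as follows. The storage layer is reduced to a plain dict mapping key to creation time; the real system persists records, including the raw IP in the document body, in its deduplication index. The exact concatenation format of the hash input is an assumption.

```python
import hashlib
import time

WINDOW_SECONDS = 24 * 60 * 60  # rolling 24-hour window

def dedup_key(ip: str, utm_key: str, job_slug: str) -> str:
    # SHA-256 over IP + campaign UTM key + job slug; exact format assumed.
    return hashlib.sha256(f"{ip}{utm_key}{job_slug}".encode()).hexdigest()

def is_duplicate(store: dict, ip: str, utm_key: str, job_slug: str,
                 now: float = None) -> bool:
    """True -> redirect without charge; False -> count the click."""
    now = time.time() if now is None else now
    key = dedup_key(ip, utm_key, job_slug)
    created = store.get(key)
    if created is not None and now - created < WINDOW_SECONDS:
        return True  # second click on the same job from the same IP within 24h
    store[key] = now  # first click, or the previous record has expired
    return False

store, t0 = {}, time.time()
assert is_duplicate(store, "203.0.113.7", "utm-1", "nurse-sevilla", t0) is False
assert is_duplicate(store, "203.0.113.7", "utm-1", "nurse-sevilla", t0 + 3600) is True
assert is_duplicate(store, "203.0.113.7", "utm-1", "nurse-sevilla", t0 + 25 * 3600) is False
```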
Can I see how many foreign clicks were intercepted on my campaign?
Yes. Foreign clicks are tracked in a dedicated foreign_clicks counter on the order document, which is displayed in the campaign detail page of the employer dashboard. The downloadable PDF performance report also includes a Foreign Country Clicks section showing the total count and an explanation of what it means. Each foreign click generates an audit record in the deduplication index tagged with type: "foreign", recording the visitor's country of origin, IP address, referrer, and timestamp — providing an accountability trail for discussions with partner networks about traffic quality.
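The audit record described above can be sketched as a simple document shape. Only the type: "foreign" tag is stated in this article; the remaining field names are assumptions about the schema, not Expertini's actual index layout.

```python
from datetime import datetime, timezone

def foreign_click_record(ip: str, country: str, referrer: str) -> dict:
    # Illustrative document shape; field names other than "type" are assumed.
    return {
        "type": "foreign",          # tags the interception category
        "country": country,         # visitor's country of origin
        "ip": ip,                   # stored for the accountability trail
        "referrer": referrer,       # which partner surface sent the click
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = foreign_click_record("198.51.100.9", "FR", "https://partner.example/listing")
assert record["type"] == "foreign"
```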
If a partner network keeps serving my ads after my campaign expires, will I be charged for those clicks?
No. There are two layers that prevent post-expiry charging. Layer 7b checks the order status first — if the campaign has already been marked as expired or completed (which happens when the expiry date is first reached), the click is immediately redirected without any database write. Layer 3 provides a secondary check: even if the status update has not yet propagated (in a race condition), the expiry date comparison in Layer 3 catches clicks arriving after the expiry timestamp. In both cases the result is identical: the click reaches the job page without being counted or charged. The 15-minute feed refresh cadence means partner networks receive updated feeds relatively quickly after a campaign status change, but the click validation system guarantees that any clicks arriving in the propagation window are not billed regardless.
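The ordering of the two guards can be sketched as a single validation function. The status values and field names ("status", "expires_at") are assumptions about the order schema, not Expertini's actual implementation.

```python
from datetime import datetime, timedelta, timezone

def should_charge(order: dict, now: datetime = None) -> bool:
    now = now or datetime.now(timezone.utc)
    # Layer 7b: an order already marked expired/completed is redirected
    # immediately, with no database write.
    if order.get("status") in ("expired", "completed"):
        return False
    # Layer 3: even if the status flip has not propagated yet (race
    # condition), the expiry-timestamp comparison catches the click.
    if now >= order["expires_at"]:
        return False
    return True

now = datetime.now(timezone.utc)
live = {"status": "active", "expires_at": now + timedelta(days=7)}
stale = {"status": "active", "expires_at": now - timedelta(minutes=5)}
assert should_charge(live, now) is True
assert should_charge(stale, now) is False  # caught by Layer 3 despite "active" status
```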
How does Expertini's fraud protection compare to platforms like Appcast or Indeed?
The most meaningful structural difference is that Expertini routes all clicks through its own tracking endpoint rather than relying on partner networks to self-report click counts. This eliminates the principal-agent conflict where the counting entity has a financial interest in the count. Both Appcast and Indeed have sophisticated internal fraud detection systems, but they operate at the platform level — Expertini's employer-visible tracking URL means the employer has an independent source of truth for their click counts.
Where large enterprise platforms like Appcast and Joveo have an advantage is in third-party IVT certification (DoubleVerify, PerformLine integration) and statistical anomaly detection at the partner network level. These capabilities identify sophisticated residential-proxy fraud that IP-level validation alone cannot catch. Expertini does not currently integrate a third-party IVT scoring service, which represents a genuine gap for very high-volume campaigns. For SME hiring campaigns and international campaigns in the hundreds-to-low-thousands of clicks range, the 7-layer system provides meaningful protection without enterprise-tier cost.
Why does the system redirect invalid clicks to the job page rather than showing an error?
The redirect-not-reject design serves two purposes. First, it protects the candidate experience: if a genuine job seeker is incorrectly classified as invalid (a false positive — which any detection system can produce), they still reach the job page rather than seeing an error. The worst outcome for a false positive is that the employer does not pay for that click; the candidate's journey is unaffected. Second, serving error responses to invalid clicks would give bad actors a signal they could use to probe the validation system's boundaries — learning which types of requests trigger errors and adjusting their approach accordingly. Silent redirection provides no actionable feedback to probing systems.
Where can I learn more about Expertini's programmatic advertising platform and launch a campaign?
The full programmatic advertising platform — where you can configure and launch campaigns across 7 partner networks with the 7-layer fraud protection described in this article — is available at /employer/programmatic-job-advertising-outreach/. For a broader scholarly examination of programmatic job advertising technology, partner integrations, competitor comparison, and ROI research, see the companion article: Programmatic Job & Recruitment Advertising — Research Guide. Both resources are designed to give employers in Spain a complete, evidence-based understanding of the technology before making advertising investment decisions.