Impact of bidding script size #1347

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open
fabricegaignier opened this issue Nov 26, 2024 · 1 comment

@fabricegaignier

We observe that around 30% of our KV server calls are not followed by an execution of our bidding script (generateBid). At its very beginning, generateBid registers forDebuggingOnly.reportAdAuctionLoss, so we are certain to receive a server-side notification whenever generateBid is triggered.

To investigate the impact of bidding script size on this phenomenon, we conducted two tests:

The first test increases the size of the production bidding script.

We append a dummy payload of arbitrary size to the script. Sizes are measured server side, before any compression. The production bidding script is around 100KB.

We use the no-cache response header: the script may be stored in caches, but the response must be revalidated with the origin server before each reuse.

We tested adding 100KB, 200KB, 400KB and 800KB.

The second test reduces the size of the bidding script. We built a minimal bidding script with all logic removed from generateBid: it only registers a call to forDebuggingOnly.reportAdAuctionLoss and returns a constant bid (-1). reportWin does nothing.

This minimal script is 1KB in size, and we can still append a dummy payload to it.
We tested script sizes of 1KB (the minimal script) and 25KB (the minimal script plus a 24KB dummy payload).
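The minimal script described above can be sketched as follows. In a real Protected Audience worklet the browser provides forDebuggingOnly; the stub below only makes the sketch self-contained, and the reporting URL is a placeholder, not our actual endpoint:

```javascript
// Worklet-provided in a real auction; stubbed here so the sketch runs standalone.
const forDebuggingOnly = globalThis.forDebuggingOnly ?? {
  reportAdAuctionLoss: (url) => {},
};

function generateBid(interestGroup, auctionSignals, perBuyerSignals,
                     trustedBiddingSignals, browserSignals) {
  // Register the loss report first, so every invocation is observable
  // server side even though the constant bid below can never win.
  forDebuggingOnly.reportAdAuctionLoss('https://bidder.example/loss-ping');
  return { ad: null, bid: -1, render: interestGroup.ads[0].renderURL };
}

function reportWin() {
  // Intentionally empty in the minimal script.
}
```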

The test ensures we have the same number of calls to the bidding script endpoint in the reference population and in the test population.

We measure the relative increase in participation as: (number of reports received in TEST − number of reports received in REF) / (number of reports received in REF).
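The metric defined above, as a one-liner (the report counts in the usage note are made-up illustration, not our measurements):

```javascript
// Relative change in received reports between the test and reference populations.
function relativeIncrease(testReports, refReports) {
  return (testReports - refReports) / refReports;
}
```

For example, 1115 reports in TEST against 1000 in REF gives +0.115, i.e. +11.5%.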

The results show a clear relationship between the size of the bidding script and the number of generateBid executions:

| size (KB) | relative increase in participation |
| --- | --- |
| 1 | +11.5% |
| 25 | +7.6% |
| 100 (current size = reference) | 0 |
| 200 | −24% |
| 300 | −26% |
| 500 | −37% |
| 900 | −46% |

This causes several issues:

  1. We are limited in improving our bidding logic and the added value for our customers.
  2. We lose opportunities while still fully bearing the infrastructure cost for them: the KV server is called while the bidding script is loading.

Are there any mitigation proposals to minimize this effect?
And more generally, what would be the possible reporting solutions to measure why we lose opportunities (the size of the bidding script being one of them)?

@JensenPaul
Collaborator

> We observe that around 30% of the KV server calls are not followed by an execution of our bidding script

This can be expected if the seller aborts the auction before fulfilling all auction configuration Promises.

Increasing the size of the bidding script increases the time it takes to download, which increases the percentage of auctions in which a bidder hits their perBuyerCumulativeTimeout. This is exacerbated on mobile devices, which generally have slower and less reliable internet connections.
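For context, perBuyerCumulativeTimeout is set in the seller's auction configuration. A hypothetical sketch, where the origins and the 500 ms budget are illustrative values, not recommendations:

```javascript
// Hypothetical seller-side auction configuration sketch.
const auctionConfig = {
  seller: 'https://seller.example',
  decisionLogicURL: 'https://seller.example/decision-logic.js',
  interestGroupBuyers: ['https://bidder.example'],
  // Wall-clock budget (ms) covering a buyer's signal fetches, script
  // download and all of its generateBid() executions; '*' applies to
  // every buyer without a more specific entry.
  perBuyerCumulativeTimeouts: { '*': 500 },
};
```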

> Are there any mitigation proposals to minimize this effect?

  1. As explained in this section of the improving Protected Audience latency site, using cache-control headers is critical to mitigating the effects of slow and unreliable network connections. Note that validating cache entries (e.g. when no-cache is used), even with ETags, still incurs a network round trip, which on high-latency networks can cost nearly as much as not caching at all. Stale-while-revalidate support is implemented in Chrome and available as of 134.0.6957.0; it provides a good compromise between latency and freshness.
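     To illustrate the trade-off, compare the two header policies; the max-age and stale-while-revalidate values below are placeholders to tune, not recommendations:

     ```
     # Revalidate-before-reuse: one conditional round trip per fetch
     Cache-Control: no-cache

     # Alternative (Chrome 134.0.6957.0+): serve the cached script
     # immediately, refresh it in the background once max-age elapses
     Cache-Control: max-age=3600, stale-while-revalidate=86400
     ```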

  2. Sellers parallelizing on-device auctions with their contextual auctions should give buyers significantly more time to fetch their bidding scripts, namely at least one more network round-trip-time plus the time it takes to execute a real-time-bidding auction.

  3. We are experimenting with ways to reduce the cost of fetching bidding scripts, for example by preconnecting.

  4. Moving more of the logic from the bidding script into the trusted bidding signals key-value server (e.g. into user-defined functions) is one way to reduce the need for complex logic, tables, or data in the on-device bidding script, and hence to reduce its size.

> And more generally, what would be the possible reporting solutions to measure why we lose opportunities (the size of the bidding script being one of them)?

  1. Some of the newly added per-participant base values for Private Aggregation may be useful here in cases where at least some of your interest groups’ generateBid() functions are invoked. These metrics may also give insight into what’s happening in cases where none of your interest groups’ generateBid() functions are invoked; for example, average-code-fetch-time may regularly approach the overall timeout in the branches of your experiment with larger scripts.
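     A hypothetical sketch of contributing such a base value from inside generateBid(); the event name, bucket and scale are illustrative and should be checked against the current Private Aggregation documentation, and the worklet-provided privateAggregation object is stubbed so the sketch is self-contained:

     ```javascript
     // Worklet-provided in a real auction; stubbed here to record contributions.
     const privateAggregation = globalThis.privateAggregation ?? {
       contributions: [],
       contributeToHistogramOnEvent(event, contribution) {
         this.contributions.push({ event, contribution });
       },
     };

     // Report the browser-computed average code fetch time into bucket 1.
     privateAggregation.contributeToHistogramOnEvent('reserved.once', {
       bucket: 1n,
       value: { baseValue: 'average-code-fetch-time', scale: 1, offset: 0 },
     });
     ```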

  2. The real-time-monitoring “Scoring script fetch error” bucket might also be worth measuring, to see whether network failures are occurring.
