Contact us
Please fill out the form and we will contact you shortly
Proba is a tool for A/B testing in mobile apps, based on Bayesian statistics

Create chart-hitting mobile products with Proba! Run experiments faster and at a lower cost, using a product hypothesis testing tool for mobile apps.

A flexible tool for:

Product owners

Mobile marketers

Analysts

App developers

Test product assumptions

Test what drives revenue and what improves your mobile apps: subscription screens (paywalls), onboarding, CTAs, special offers, gameplay, pricing plans, UI/UX solutions, interface elements (buttons, colors, layout, etc.), new features, payment methods, and more.
Make informed decisions based on real-time statistics and grow your mobile app with A/B testing.

Optimize for conversions, revenue, and ARPU

Run experiments that optimize not only for conversions but also for ARPU, revenue, and target actions (ad views, recurring purchases, orders, game progress, app launches, etc.).
Partners that entrust their products to us

Run classic A/B tests

Manually specify traffic distribution shares for each option. Redistribute traffic on the go, without interrupting the experiment. Use classic A/B tests when it's important to control the share of the focus group.

Segment traffic by source

Evaluate results by traffic source: different solutions may work better on certain channels. Monitor performance and reach maximum conversions on every channel!

Get compelling experiment results

View helpful reports with experiment results. Publish top-performing options in one click, in real time.

Simple integration and built-in debugging mode

The SDK is available for iOS, Android (Kotlin, Java), Unity, React Native, and Flutter. Verify that the tested options work correctly in the built-in debugging mode.

Integrate A/B testing with your analytics system

Set up integration with the analytics and mobile attribution systems you already use in your mobile apps (e.g., Amplitude, Appsflyer, Adjust, Dev2dev, or Appmetrica).

An all-in-one solution for A/B testing of mobile products

We have taken care of your users: the SDK is only 400 KB, so it will not affect the performance or size of your Android or iOS app

Simple SDK integration

Your personal manager will help you design the perfect experiment, from launch and tracking to interpreting the final results

Responsive support

Publish experiment findings in real time, without waiting for a new release of your mobile app

Real-time updates

The built-in integration check mode helps you verify that the tested options work correctly

Handy debugging mode

Smart distribution algorithms based on Bayesian statistics

Automatic traffic distribution

Our service is compatible with Amplitude, Appsflyer, Adjust, Dev2dev, and Appmetrica

Integration with analytics systems

Select audience segments to test

Audience segmentation

Find out which A/B testing option delivers more profit

Optimization for ARPU

During the experiment, the algorithm automatically redistributes users between the options so that the "winning" option gets the major share of traffic. This saves up to 80% of traffic and helps you make substantiated decisions faster.

Trust Bayesian statistics

FAQ

How does user distribution in A/B testing work?
The service offers manual and automatic traffic distribution between testing options.

With manual (classic) distribution, you set the share of traffic for each option yourself, for instance 50/50 or 70/30.

Manual distribution works great when you need to control the share of the audience on which you test a new feature — for instance, in classic A/B tests.
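
As an illustration, one common way to implement fixed splits like these (a generic sketch, not necessarily how Proba does it internally) is to hash the user ID into weighted buckets, so the same user always sees the same option:

```python
import hashlib

def assign_option(user_id: str, weights: dict) -> str:
    """Deterministically map a user to an option according to
    fixed traffic shares, e.g. {"A": 0.5, "B": 0.5}."""
    # Hash the user ID into a stable point in [0, 1].
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF
    # Walk the cumulative shares to find the user's bucket.
    cumulative = 0.0
    for option, share in weights.items():
        cumulative += share
        if point < cumulative:
            return option
    return option  # guard against floating-point rounding

# Example: a classic 70/30 split.
print(assign_option("user-42", {"A": 0.7, "B": 0.3}))
```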

Automatic distribution is an algorithm that identifies the better-performing option and drives the largest share of traffic to it. The benefit of this method is that the algorithm can adjust the distribution on the go, without interrupting the experiment.

As a result, you get more conversions during the test, which saves time and earns more than classic A/B testing. This distribution technique is underpinned by Bayesian statistics and Thompson sampling.
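
In textbook form, Thompson sampling for a conversion metric keeps a Beta posterior per option, samples a plausible conversion rate from each, and routes the user to the highest draw. A minimal illustrative sketch (not Proba's actual implementation):

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson sampling over testing options."""

    def __init__(self, options):
        # One Beta(1, 1) prior (uniform) per option: the first count
        # tracks successes + 1, the second failures + 1.
        self.posteriors = {o: [1, 1] for o in options}

    def choose(self) -> str:
        # Sample a plausible conversion rate from each posterior and
        # send the user to the option with the highest draw.
        draws = {o: random.betavariate(a, b)
                 for o, (a, b) in self.posteriors.items()}
        return max(draws, key=draws.get)

    def record(self, option: str, converted: bool) -> None:
        # Update the chosen option's posterior with the outcome.
        self.posteriors[option][0 if converted else 1] += 1
```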

What is the difference between automatic and manual distribution?
In classic A/B testing, users are divided into groups, and each group is shown a different option. After some time, the experiment is evaluated on a specific metric (e.g., conversion to purchase). In this model, traffic shares remain the same throughout the experiment.

The algorithm behind automatic distribution allocates traffic dynamically. During the test, it continuously analyzes each option's performance on the selected metric and distributes users accordingly: the better an option performs, the more traffic it receives.

For instance, say you test subscription screens and have selected conversion to purchase as the key metric. You have 4 paywall options, and traffic is initially distributed evenly between them.
After a preset time from launch, Paywall 1 performs better than the others. The algorithm then allocates more users to this paywall and continues to analyze the metrics. If another option starts showing a higher conversion rate, the algorithm redistributes traffic again.

The algorithm collects the metrics and revenue data it needs on its own. In such tests, poorly performing options are shown to fewer users, so experiment costs decrease.
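
To see the redistribution effect on the paywall example, you can run the ThompsonSampler sketch above in a small simulation. The conversion rates below are invented purely for illustration; the strongest paywall ends up absorbing most of the simulated traffic:

```python
import random

# Hypothetical conversion rates for the four paywalls (invented numbers).
true_rates = {"Paywall 1": 0.06, "Paywall 2": 0.03,
              "Paywall 3": 0.04, "Paywall 4": 0.02}
sampler = ThompsonSampler(true_rates)   # from the sketch above
shown = {option: 0 for option in true_rates}

for _ in range(10_000):                 # 10,000 simulated users
    option = sampler.choose()
    shown[option] += 1
    sampler.record(option, random.random() < true_rates[option])

print(shown)  # most impressions land on Paywall 1
```
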
How does A/B testing with automatic distribution make money faster?
We conducted an A/B test of paywalls using manual traffic distribution for a client in the dating vertical. Based on the results, we shipped the best-performing subscription screen.
Some time later, we fed the historical data to the automatic distribution system to see how the algorithm would have distributed traffic.

Results:

  • Perfect option: 1,070 purchases (100%). The number of purchases we could have generated if we had guessed the best option from the start and applied it without further testing.
  • Historical: 745 purchases (69%). The number of purchases we actually generated with manual distribution.
  • Thompson sampling: 935 purchases (87%). The number of purchases the automatic distribution algorithm could have delivered.

It turns out the algorithm would have handled the task better. According to the test results, the option we picked with manual distribution would also have been recognized as the best by the automatic algorithm. But for the same money actually spent, the client could have generated more purchases (935 - 745 = 190 additional ones) at a lower CAC. In monetary terms, the project missed some $7,500 on the focus group over 5 days.
Case study: How we lost $7,500 on mobile app A/B tests but learned how to conduct them
Why do I need to segment traffic when carrying out A/B testing?
Users from different channels, in different regions, and on different devices may behave differently in the app.

A great case from our experience: for one of our clients, an app, we bought traffic from Facebook and Snapchat. The test showed that Facebook campaigns converted better into trials, while major in-app purchases came from Snapchat.
This is why you need to account for user acquisition specifics and segment traffic when running A/B tests. Our tool can segment traffic by source, region, device, operating system, and other parameters.
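
Purely as an illustration (the helper below reuses the assign_option hashing sketch from earlier; none of these names belong to Proba's real API), per-channel segmentation can be as simple as salting the assignment with the traffic source, so that each channel runs its own independent split:

```python
def assign_for_segment(user_id: str, source: str) -> str:
    """Run a separate 50/50 test per acquisition channel.
    assign_option is the hashing helper sketched earlier."""
    # Salting with the source keeps per-channel splits independent.
    if source in ("facebook", "snapchat"):
        return assign_option(f"{source}:{user_id}", {"A": 0.5, "B": 0.5})
    return "A"  # untracked channels get the control option

print(assign_for_segment("user-42", "snapchat"))
```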

Who would benefit from the A/B testing service?
  • Those who continuously put new hypotheses to the test and want to automate validation;
  • Those who make informed, statistics-based decisions about their product;
  • Those who want to cut hypothesis-testing costs and accelerate the process.