Business Insights Report Example

The goal of this analysis was to determine the most effective way to use Machine Learning (ML) to boost XXXXXX metrics in the XXXX online Match 3 game, and to propose several application approaches. Our primary focus was finding a way to increase revenue, so we examined the distribution of virtual (gold) spending and saw that 44% of all gold spending fell into the “Extend Game” category (Figure 1), of which 97% was “More Moves”. When we restricted the view to virtual payments made by users who also made real payments, this share rose to 48%. We therefore advocate that boosting the incentive to use “More Moves” will bring the most benefit revenue-wise: in around 60% of attempts where users had enough gold to purchase more moves, they failed the level without purchasing.
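The spending-distribution figures above come from a simple aggregation; a minimal sketch is shown below. The column names (`category`, `gold_spent`, `is_payer`) and the toy numbers are illustrative assumptions, not the real event schema or the real figures.

```python
import pandas as pd

def spending_share(df: pd.DataFrame, category: str) -> float:
    """Fraction of total gold spending that falls into a given category."""
    return df.loc[df["category"] == category, "gold_spent"].sum() / df["gold_spent"].sum()

# Toy transaction log illustrating the computation (not the real data).
tx = pd.DataFrame({
    "user_id":    [1, 1, 2, 2, 3],
    "category":   ["Extend Game", "Boosters", "Extend Game", "Lives", "Extend Game"],
    "gold_spent": [40, 30, 50, 20, 60],
    "is_payer":   [True, True, False, False, True],
})

overall_share = spending_share(tx, "Extend Game")                  # 150 / 200
payer_share = spending_share(tx[tx["is_payer"]], "Extend Game")    # 100 / 130
```

The same helper computes both the all-players share and the payers-only share; in the report's data the payer share is the higher of the two.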

Figure 1. Gold Spending Distribution

We noticed that, among users who completed the level, they were more likely to use the feature if they were 1 to 3 moves away from completion, which supports the presumption that users are more likely to spend their gold on more moves when they are confident they will pass the level. Therefore, our focus is to model these situations, determine the best set of features (user- and level-wise), dynamically set the level complexity for a specific user, and increase the incentive for the “More Moves” purchase. Our initial plan is to focus on dynamic tuning of the number of available moves for the level/stage combination; this will subsequently be extended to tuning other level/stage parameters as well. Bearing in mind that users need to feel they can complete the level after the purchase, we will avoid a “the harder the better” policy as an output.

To better understand the situations in which users tend to use “More Moves”, we examined the correlations and patterns between the available features, as well as features derived from them, in situations where users did and did not use the feature. However, we advocate that these patterns and situations will be best captured by an ML model, used later to optimize level difficulty.

To reduce bias from users who were in situations where they wanted, and should have, used “More Moves” but did not, either because they did not understand the feature well or, more probably, because they lacked the resources to do so (perhaps paying was not an option for them), we focused on users who made real payments or had a significant gold stash, and who used the “More Moves” feature at least once in their lifetime. Moreover, we excluded data on levels with the “Time” mechanic for now and focused only on “Moves”.
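The cohort filter above can be sketched as follows. The column names, the `GOLD_STASH_THRESHOLD` cutoff, and the toy tables are hypothetical placeholders for the real schema and the real definition of a “significant” stash.

```python
import pandas as pd

GOLD_STASH_THRESHOLD = 1000  # assumed cutoff for a "significant" gold stash

def analysis_cohort(users: pd.DataFrame, attempts: pd.DataFrame) -> pd.DataFrame:
    """Keep "Moves"-mechanic attempts by users who made a real payment (or
    hold a large gold stash) and used "More Moves" at least once."""
    eligible = users[
        (users["made_real_payment"] | (users["gold_stash"] >= GOLD_STASH_THRESHOLD))
        & (users["more_moves_uses"] >= 1)
    ]
    moves_only = attempts[attempts["mechanic"] == "Moves"]
    return moves_only[moves_only["user_id"].isin(eligible["user_id"])]

# Toy example: user 3 never used "More Moves" and is excluded, as is the
# "Time"-mechanic attempt by user 1.
users = pd.DataFrame({
    "user_id":           [1, 2, 3],
    "made_real_payment": [True, False, False],
    "gold_stash":        [0, 2000, 50],
    "more_moves_uses":   [1, 3, 0],
})
attempts = pd.DataFrame({
    "user_id":  [1, 2, 3, 1],
    "mechanic": ["Moves", "Moves", "Moves", "Time"],
})
cohort = analysis_cohort(users, attempts)
```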

Figure 2. Percentage of times More Moves was used

We further confirmed that our focus on payers' data was a preferable choice, since payers were twice as likely to “extend moves” (Figure 2). However, in order to benefit from this selection, we had to ensure that this was not due to Simpson's paradox. In this case, that would mean that, when observed per level, payers did not actually have a higher probability of using “More Moves”. This could occur because payers are more likely to stick with the game and thus reach higher levels, which are on average more difficult and more often require extra moves. However, this was not the case: payers had a higher probability of using the feature conditioned on level, and on the level/stage combination as well. Furthermore, as users progress, we see a positive trend in feature usage.
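The Simpson's-paradox check described above amounts to comparing segments both overall and conditioned on level. A minimal sketch, with an assumed attempt log (`level`, `segment`, `used_more_moves`) and toy data:

```python
import pandas as pd

def usage_rates(attempts: pd.DataFrame):
    """Overall and per-level "More Moves" usage rates per player segment.
    If payers led overall but not within levels, the overall gap would be a
    Simpson's-paradox artifact of payers reaching harder levels."""
    overall = attempts.groupby("segment")["used_more_moves"].mean()
    per_level = (attempts.groupby(["level", "segment"])["used_more_moves"]
                 .mean()
                 .unstack("segment"))
    return overall, per_level

# Toy attempt log: payers lead overall AND within every level, so the
# overall gap survives conditioning on level.
attempts = pd.DataFrame({
    "level":           [1, 1, 1, 1, 2, 2, 2, 2],
    "segment":         ["payer", "payer", "non_payer", "non_payer"] * 2,
    "used_more_moves": [1, 0, 0, 0, 1, 1, 1, 0],
})
overall, per_level = usage_rates(attempts)
```

In the report's data, the same comparison was also run on the level/stage combination, with the same conclusion.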

We are also aware that this is partially due to self-selection, given that users who like the feature and find it useful are more likely to become payers. Since these are the users we want to target with more challenging levels, it makes sense to put them at the focus of our analysis. As part of future efforts, we would explore upselling the feature. For example, based on the available data, a user could receive a suggestion that it is better for them to purchase 10 or more moves at a slightly lower per-move price rather than risk spending 5 moves without reaching the Level Goal. Besides revenue, this would improve the user experience, since failing to complete a level after purchasing more moves increases frustration, which later reflects negatively on retention.
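The upsell idea could be reduced to a simple decision rule. Everything here is hypothetical: the pack sizes, prices, the threshold, and the assumption that a completion-probability estimate for the standard 5-move pack is available from the model.

```python
def suggest_pack(p_complete_with_5: float,
                 small_pack=(5, 100),   # (moves, gold price) -- illustrative values
                 large_pack=(12, 220),  # cheaper per move than 12/5 of the small pack
                 threshold: float = 0.5):
    """Offer the larger, per-move-cheaper pack when the standard 5-move pack
    is unlikely to be enough to reach the Level Goal."""
    return large_pack if p_complete_with_5 < threshold else small_pack
```

A user estimated at a 30% chance of finishing with 5 extra moves would be shown the 12-move pack; one at 80% would keep the standard offer.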

Key Findings

  • 44% of all gold spending by all players goes to the “Extend Game” category, 97% of which is “More Moves”
  • 48% of all gold spending by paying players goes to this feature
  • In around 60% of attempts where “More Moves” would have helped and the player could afford it, the player did not buy it (maybe put more emphasis on this feature or provide a better tutorial?)
  • Players who use “More Moves” once tend to use it more often afterwards

Proposed project

We propose a project whose goal is to increase revenue by further monetizing the “More Moves” feature, since we have identified it as the biggest revenue driver and see significant untapped potential in it.

Our plan is to increase the incentive to purchase it by tuning level difficulty so that users are more likely to see the feature as a valuable investment. In other words, we would increase the number of situations where users use all their available moves and see that they need just a few more to complete the level. The level difficulty would be updated by changing the number of initially available moves (and subsequently other parameters as well) for a specific stage/level combination, based on the user's previous in-game behaviour, level/stage characteristics, and the behaviour of similar users captured by the Machine Learning model.
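Once the model predicts how many moves a given user needs on a given level, the tuning step itself is simple. The sketch below is an illustration of that step only (the `shortfall` and `min_moves` values are assumptions), not the production model.

```python
def tuned_initial_moves(predicted_moves_needed: float,
                        shortfall: int = 2, min_moves: int = 5) -> int:
    """Grant slightly fewer moves than the model predicts the user needs,
    making a near-miss (1-3 moves short) -- the situation where "More Moves"
    is most attractive -- more likely, while keeping the level clearly
    completable after the purchase (never "the harder the better")."""
    return max(min_moves, round(predicted_moves_needed) - shortfall)
```

For example, a user predicted to need 25 moves would start with 23; the floor prevents degenerate budgets on trivially short levels.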

Predicting the number of moves for an upcoming level has three main advantages:

  • It will be more likely for the player to buy “More Moves” to complete the level, since they are near the goal
  • It will make gameplay more entertaining (fewer overly easy levels where you finish with 20 or so moves left, which can be a reason for churn)
  • We can use this knowledge to offer not a static 5 extra moves, but the number of moves that gives the player the highest chance of completing the level

Proposed application alternatives

  • This application would use the available data to determine how many moves a user should be given on a specific level/stage combination (and subsequently what other level parameters should be set) in order to increase the probability of them purchasing “More Moves”. The model would be retrained at scheduled intervals (e.g. once per day) to capture behavioural changes that come with new cohorts and level difficulty updates. Per-user predictions would be made as soon as data for that user becomes available, and level difficulty updates would be sent whenever possible (e.g. while the user is playing in online mode). The time between two consecutive updates can be configurable and dependent on performance limitations. If a level difficulty update is unavailable, we would fall back to a recalibrated default difficulty setting, optimized to reduce bonus harvest with minimal or no impact on the level completion percentage. This difficulty update does not have to be applied to every level; some levels could be left as they are, for “relief”.
    • If we can collect additional information about the level completion percentage (per goal) during the last 3 (or more) moves before level end, we will have better estimates of the moves left. Currently, we rely on data from situations where the user used “More Moves” and completed the level, together with “bonus harvest” information (moves left upon completion), to determine how many moves the user actually needed to finish the level. We also utilize the initial number of available moves and goal-related completion percentages to obtain a useful estimate of the moves needed for level completion. This estimate could be improved with more information about user progress during the final moves.
  • Another application is conceptually similar to the previous one, with the difference that the model is embedded in the app and updated occasionally (e.g. once a month). We can benefit from this approach because the model always has the user's most recent information available. Although this will increase technical complexity, the concept is well established in iOS app development.
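The moves-needed label that both alternatives rely on can be derived per attempt as described above. The field names below are assumptions about the attempt log; the key point is that failed attempts only lower-bound the true requirement.

```python
from typing import Optional

def moves_needed(initial_moves: int, extra_moves_bought: int,
                 moves_left: int, completed: bool) -> Optional[int]:
    """Moves actually consumed to finish a level. For failed attempts the
    true requirement is unknown (only lower-bounded), so return None."""
    if not completed:
        return None
    # "Bonus harvest" (moves_left on completion) is subtracted from the
    # total move budget: initial moves plus any extra moves purchased.
    return initial_moves + extra_moves_bought - moves_left
```

For example, a user who started with 20 moves, bought 5 more, and finished with 2 left actually needed 23; collecting per-goal completion data over the final moves would let us estimate this quantity for failed attempts too.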