Summary
This article explores the intersection of momentum and artificial intelligence in predicting NFL teams' success on 4th downs, highlighting its implications for coaches and fans alike. Key Points:
- Momentum in NFL games significantly affects player performance and coaching decisions, making it essential to quantify using advanced statistical techniques.
- Machine learning helps identify key variables influencing 4th down decision-making, such as field position and score differential, offering insights into coaches' strategies.
- A predictive model built with a gradient-boosting algorithm (XGBoost) predicts the success of 4th down conversions based on various game-specific factors.
The method outlined in this article can be implemented and executed in real time. There is no excuse for a coach to make a poor decision on fourth down: if you measure the game's momentum accurately, knowing when to take the risk on fourth down ceases to be a mystery.
With 7:32 remaining in the NFC Championship game, facing a fourth-and-three situation from the Niners' 30-yard line, the Lions attempted but failed to convert. According to my model, their chances of success at that precise moment were less than 20%, meaning there was an 80% likelihood of failure. Would Dan Campbell have made the same choice had he been equipped with this data?
Key Points Summary
- Sports data analytics involves analyzing and interpreting large volumes of data generated from sports activities.
- Teams leverage historical statistics to gain a competitive edge and enhance their performance.
- Analytics includes evaluating data signals, video footage, and predictive metrics like goals and assists.
- Educational programs like the MSc in Sport Data Analytics at Strathclyde focus on developing technical skills for complex data analysis.
- Understanding fan behavior through analytics helps sports organizations engage better with their audience.
- Sports media companies utilize analytics to improve reporting and increase viewer engagement.
In today's sports world, data analytics is becoming a game-changer. It allows teams not only to boost their performance by understanding the numbers behind the game but also to connect more deeply with fans. As we see more emphasis on using technology in sports, it's exciting to think about how this will shape our experiences as fans and how teams can play smarter.
Extended Comparison:

Aspect | Traditional Analytics | Machine Learning Analytics | Predictive Metrics | Fan Engagement Strategies
---|---|---|---|---
Data Sources | Historical statistics and game footage | Real-time data, player tracking, and social media interactions | Goals, assists, turnovers, and player efficiency ratings | Surveys, social media sentiment analysis, and ticket sales data
Analysis Techniques | Descriptive statistics and basic forecasting methods | Advanced algorithms like neural networks and decision trees | Regression analysis for performance prediction and risk assessment | Personalized content delivery based on user behavior patterns
Applications in Sports Teams | Game strategy development and post-game reviews | In-game decision making for 4th down scenarios using AI predictions | Talent scouting through predictive modeling of player potential | Tailored promotions based on fan preferences
Trends in Sports Media Companies | Basic reporting with limited interactivity | Interactive infographics powered by real-time analytics | Enhanced storytelling through predictive insights into games | Engagement through dynamic content aligning with fan interests
Harnessing the Power of Momentum for Organizational Success
Momentum is a dynamic force that can significantly influence performance and behavior. It oscillates between positive and negative phases, each offering distinct insights into the factors driving these changes. By carefully observing these momentum patterns, one can gain a deeper understanding of the underlying elements affecting outcomes.

Recognizing and leveraging momentum allows organizations to refine their strategies proactively. When positive momentum is identified, it can be harnessed to boost performance and make more informed decisions. Conversely, understanding periods of negative momentum enables timely interventions to mitigate adverse effects and enhance resilience against challenges.
Incorporating this understanding of momentum dynamics into organizational practices leads to improved overall performance. It empowers decision-makers with the ability to anticipate shifts in momentum, adapt accordingly, and maintain a competitive edge in an ever-changing environment.
As a data scientist, I create metrics that reflect momentum in sports and utilize these metrics to make probabilistic forecasts about future outcomes. Momentum in sports can be quite elusive; you can recognize it when it’s there, yet predicting its fluctuations—whether it will increase, decrease, or shift altogether—remains challenging. Many skeptics argue that momentum is merely an illusion.
Momentum Analytics Empowers Strategic Decision-Making in Sports
Momentum plays a crucial role in decision-making processes, particularly in the context of sports analytics. By quantifying momentum, coaches can gain valuable insights that enhance their strategic choices during critical moments, such as 4th down conversions. An effective way to capture and analyze game momentum is through predictive models that utilize various momentum variables. These models have shown remarkable precision in forecasting the likelihood of successful conversions, thereby empowering coaches with data-driven guidance at pivotal times on the sideline.

The integration of momentum-based analytics into sports not only improves real-time decision-making but also opens new avenues for further research and development. As these advanced predictive models evolve, they hold the potential to significantly transform how teams approach game-day strategies, allowing analysts and coaches alike to make more informed decisions based on quantitative assessments of momentum dynamics throughout the game.
!pip install nfl_data_py
!pip install xgboost
Import the essential Python libraries.
import nfl_data_py as nfl
import pandas as pd
import numpy as np
from sklearn import metrics
from sklearn.preprocessing import LabelEncoder
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from xgboost import plot_importance
import plotly.graph_objs as go
import plotly
Import NFL play-by-play data using the nfl_data_py library.
x = nfl.import_pbp_data(list(range(1999, 2024)), downcast=True, cache=False, alt_path=None)
Only include the plays that were either a pass or a run.
x=x[(x['play_type']=='pass') | (x['play_type']=='run')]
Select the fields we will need for our model, along with reference fields.
x = x[['play_id', 'game_id', 'home_team', 'game_seconds_remaining', 'away_team', 'season_type',
       'week', 'side_of_field', 'yardline_100', 'game_date', 'posteam', 'game_half', 'drive',
       'qtr', 'down', 'time', 'yrdln', 'ydstogo', 'ydsnet', 'play_type', 'yards_gained',
       'home_timeouts_remaining', 'away_timeouts_remaining', 'total_home_score', 'fumble_lost',
       'total_away_score', 'interception', 'home_wp', 'away_wp', 'vegas_home_wp',
       'fourth_down_converted', 'touchdown', 'fourth_down_failed', 'season', 'series',
       'penalty_team', 'penalty_yards', 'rush_attempt', 'pass_attempt']]
Create a metric by multiplying the yards gained by negative one. This will serve as a foundational element for developing additional features.
x['NEGATIVE_YARDS']=x['yards_gained']*(-1)
Create a flag that distinguishes the home team from the visiting team.
x['HOME_OFFENSE'] = np.where(x.home_team == x.posteam, 1, 0)
x['AWAY_OFFENSE'] = np.where(x.away_team == x.posteam, 1, 0)
Compute home and away rushing, passing, and total offensive yards.
x['HOME_RUSH_YARDS'] = x['HOME_OFFENSE'] * x['yards_gained'] * x['rush_attempt']
x['AWAY_RUSH_YARDS'] = x['AWAY_OFFENSE'] * x['yards_gained'] * x['rush_attempt']
x['HOME_PASS_YARDS'] = x['HOME_OFFENSE'] * x['yards_gained'] * x['pass_attempt']
x['AWAY_PASS_YARDS'] = x['AWAY_OFFENSE'] * x['yards_gained'] * x['pass_attempt']
x['HOME_OFFENSE_YARDS'] = x['HOME_OFFENSE'] * x['yards_gained']
x['AWAY_OFFENSE_YARDS'] = x['AWAY_OFFENSE'] * x['yards_gained']
Identify home and away fumbles and interceptions.
x['HOME_FUMBLES_LOST'] = x['HOME_OFFENSE'] * x['fumble_lost']
x['AWAY_FUMBLES_LOST'] = x['AWAY_OFFENSE'] * x['fumble_lost']
x['HOME_INTERCEPTION'] = x['HOME_OFFENSE'] * x['interception']
x['AWAY_INTERCEPTION'] = x['AWAY_OFFENSE'] * x['interception']
Identify home and away touchdowns.
x['HOME_TD'] = x['HOME_OFFENSE'] * x['touchdown']
x['AWAY_TD'] = x['AWAY_OFFENSE'] * x['touchdown']
Identify Home and Away net yards.
x['HOME_NET_YARDS'] = np.where(x.HOME_OFFENSE == 1, x.yards_gained, x.NEGATIVE_YARDS)
x['AWAY_NET_YARDS'] = np.where(x.AWAY_OFFENSE == 1, x.yards_gained, x.NEGATIVE_YARDS)
Identify Home and Away penalties and penalty yards.
x['HOME_PENALTY'] = np.where(x.home_team == x.penalty_team, 1, 0)
x['AWAY_PENALTY'] = np.where(x.away_team == x.penalty_team, 1, 0)
x['HOME_PENALTY_YARDS'] = x['HOME_PENALTY'] * x['penalty_yards']
x['AWAY_PENALTY_YARDS'] = x['AWAY_PENALTY'] * x['penalty_yards']
Focus on identifying significant positive plays, specifically those that exceed distances of 10, 20, 30, and 50 yards.
x['BIG_10'] = np.where(x.yards_gained > 10, 1, 0)
x['BIG_20'] = np.where(x.yards_gained > 20, 1, 0)
x['BIG_30'] = np.where(x.yards_gained > 30, 1, 0)
x['BIG_50'] = np.where(x.yards_gained > 50, 1, 0)
Identify significant positive plays, specifically those exceeding 10, 20, 30, and 50 yards for both the home and away teams.
x['HOME_BIG_10'] = x['BIG_10'] * x['HOME_OFFENSE']
x['HOME_BIG_20'] = x['BIG_20'] * x['HOME_OFFENSE']
x['HOME_BIG_30'] = x['BIG_30'] * x['HOME_OFFENSE']
x['HOME_BIG_50'] = x['BIG_50'] * x['HOME_OFFENSE']
x['AWAY_BIG_10'] = x['BIG_10'] * x['AWAY_OFFENSE']
x['AWAY_BIG_20'] = x['BIG_20'] * x['AWAY_OFFENSE']
x['AWAY_BIG_30'] = x['BIG_30'] * x['AWAY_OFFENSE']
x['AWAY_BIG_50'] = x['BIG_50'] * x['AWAY_OFFENSE']
Identify big negative plays and associate them with the home team or the away team.
x['NEG_00'] = np.where(x.yards_gained < 0, 1, 0)
x['NEG_05'] = np.where(x.yards_gained < -5, 1, 0)
x['NEG_10'] = np.where(x.yards_gained < -10, 1, 0)
x['NEG_15'] = np.where(x.yards_gained < -15, 1, 0)
x['HOME_NEG_00'] = x['NEG_00'] * x['HOME_OFFENSE']
x['HOME_NEG_05'] = x['NEG_05'] * x['HOME_OFFENSE']
x['HOME_NEG_10'] = x['NEG_10'] * x['HOME_OFFENSE']
x['HOME_NEG_15'] = x['NEG_15'] * x['HOME_OFFENSE']
x['AWAY_NEG_00'] = x['NEG_00'] * x['AWAY_OFFENSE']
x['AWAY_NEG_05'] = x['NEG_05'] * x['AWAY_OFFENSE']
x['AWAY_NEG_10'] = x['NEG_10'] * x['AWAY_OFFENSE']
x['AWAY_NEG_15'] = x['NEG_15'] * x['AWAY_OFFENSE']
Flag successful and failed fourth down conversions for the home and away teams.
x['HOME_4TH_CONV'] = x['fourth_down_converted'] * x['HOME_OFFENSE']
x['AWAY_4TH_CONV'] = x['fourth_down_converted'] * x['AWAY_OFFENSE']
x['HOME_4TH_FAIL'] = x['fourth_down_failed'] * x['HOME_OFFENSE']
x['AWAY_4TH_FAIL'] = x['fourth_down_failed'] * x['AWAY_OFFENSE']
Associate rushing and passing attempts with the home and away team.
x['HOME_RUSH_ATTEMPT'] = x['rush_attempt'] * x['HOME_OFFENSE']
x['AWAY_RUSH_ATTEMPT'] = x['rush_attempt'] * x['AWAY_OFFENSE']
x['HOME_PASS_ATTEMPT'] = x['pass_attempt'] * x['HOME_OFFENSE']
x['AWAY_PASS_ATTEMPT'] = x['pass_attempt'] * x['AWAY_OFFENSE']
To organize the data correctly, sort the plays by season, game date, game_id, play_id, and seconds remaining in the game, then reset the index to create a sequential play counter.
q = x.sort_values(by=['season', 'game_date', 'game_id', 'play_id', 'game_seconds_remaining'],
                  ascending=[True, True, True, True, False])
q = q.reset_index()
q = q.rename(index=str, columns={"index": "wilma"})
q['wilma'] = q['wilma'].astype(int)
q['wilma'] = q['wilma'] + 1
Find the minimum play index (wilma) for each game. This key figure will be instrumental in constructing further features.
result = pd.DataFrame(q.groupby('game_id').agg({'wilma': ['min']}))
result = result.reset_index()
result = result.set_axis(['game_id', 'wilma_min'], axis='columns')
Merge each game's minimum play index back onto the original dataframe.
q = pd.merge(q,result, how="inner", on=["game_id"])
Ensure the dataframe is sorted correctly.
q=q.sort_values(by=['season','game_date','game_id','play_id','game_seconds_remaining'],ascending=[True,True,True,True,False])
Develop short- to medium-term derived momentum metrics. For each of the categories we established, we will aggregate the data from the last 5, 10, 15, and 30 plays.
# Map each rolling-window feature to the per-play column it aggregates.
roll_sum_map = {}
for side in ['HOME', 'AWAY']:
    roll_sum_map.update({
        f'{side}_OFFENSE_YARDS_SUM': f'{side}_OFFENSE_YARDS',
        f'{side}_OFFENSE_SUM': f'{side}_OFFENSE',
        f'{side}_BIG_10_SUM': f'{side}_BIG_10',
        f'{side}_BIG_20_SUM': f'{side}_BIG_20',
        f'{side}_BIG_30_SUM': f'{side}_BIG_30',
        f'{side}_BIG_50_SUM': f'{side}_BIG_50',
        f'{side}_NEG_00_SUM': f'{side}_NEG_00',
        f'{side}_NEG_05_SUM': f'{side}_NEG_05',
        f'{side}_NEG_10_SUM': f'{side}_NEG_10',
        f'{side}_NEG_15_SUM': f'{side}_NEG_15',
        f'{side}_FUMBLES_SUM': f'{side}_FUMBLES_LOST',
        f'{side}_INTERCEPTION_SUM': f'{side}_INTERCEPTION',
        f'{side}_TD_SUM': f'{side}_TD',
        f'{side}_PENALTY_SUM': f'{side}_PENALTY',
        f'{side}_PENALTY_YDS_SUM': f'{side}_PENALTY_YARDS',
        f'{side}_RUSH_YDS_SUM': f'{side}_RUSH_YARDS',
        f'{side}_RUSH_PLAYS_SUM': f'{side}_RUSH_ATTEMPT',
        f'{side}_PASS_YDS_SUM': f'{side}_PASS_YARDS',
        f'{side}_PASS_PLAYS_SUM': f'{side}_PASS_ATTEMPT',
    })

for window in [5, 10, 15, 30]:
    print(window)
    fred = '_' + str(window)

    # Flag plays whose look-back window would reach into the previous game;
    # their rolling features get the sentinel value -9999.
    q['too_soon'] = np.where(q.wilma - q.wilma_min <= window, 1, 0)

    # Rolling averages (these keep the original _SUM names even though they are means).
    for side in ['HOME', 'AWAY']:
        q[side + '_NET_YARDS_SUM' + fred] = np.where(
            q.too_soon == 0,
            q[side + '_NET_YARDS'].rolling(min_periods=1, window=window).mean(), -9999)

    # Rolling sums over the last `window` plays.
    for target, source in roll_sum_map.items():
        q[target + fred] = np.where(
            q.too_soon == 0,
            q[source].rolling(min_periods=1, window=window).sum(), -9999)

    # Share of recent plays run by the home offense.
    # (The original divided AWAY by AWAY + AWAY; corrected to HOME / (HOME + AWAY).)
    q['HOME_OFFENSE_PCT' + fred] = np.where(
        q.too_soon == 0,
        q['HOME_OFFENSE_SUM' + fred] / (q['HOME_OFFENSE_SUM' + fred] + q['AWAY_OFFENSE_SUM' + fred]),
        -9999)

    for side in ['HOME', 'AWAY']:
        # Combined turnovers over the window.
        q[side + '_TURNOVER_SUM' + fred] = (q[side + '_INTERCEPTION_SUM' + fred]
                                            + q[side + '_FUMBLES_SUM' + fred])
        # Yards per play over the window.
        for unit in ['RUSH', 'PASS']:
            q[side + '_' + unit + '_YDS_PLAY' + fred] = np.where(
                q.too_soon == 0,
                q[side + '_' + unit + '_YDS_SUM' + fred] / q[side + '_' + unit + '_PLAYS_SUM' + fred],
                -9999)
        # Recent 4th down outcomes; the current play's own result is zeroed out so the
        # feature reflects only pre-snap history. (The original checked the _CONV flag
        # for the _FAIL features; corrected to check the matching flag.)
        for outcome in ['CONV', 'FAIL']:
            q[side + '_4TH_' + outcome + fred] = np.where(
                q[side + '_4TH_' + outcome] == 1, 0,
                np.where(q.too_soon == 0,
                         q[side + '_4TH_' + outcome].rolling(min_periods=1, window=window).sum(),
                         -9999))

    q = q.copy()  # defragment the frame after adding many columns
q=q.sort_values(by=['season','game_date','game_id','play_id','game_seconds_remaining'],ascending=[True,True,True,True,False])
Develop situational momentum variables for the game. This time, accumulate the data progressively as the game unfolds.
# Whole-game cumulative versions of each momentum ingredient, accumulated
# within each game as it unfolds.
cum_map = {}
for side in ['HOME', 'AWAY']:
    cum_map.update({
        f'{side}_NET_YARDS_SUM': f'{side}_NET_YARDS',
        f'{side}_OFFENSE_YARDS_SUM': f'{side}_OFFENSE_YARDS',
        f'{side}_OFFENSE_SUM': f'{side}_OFFENSE',
        f'{side}_4TH_FAIL_SUM': f'{side}_4TH_FAIL',
        f'{side}_4TH_CONV_SUM': f'{side}_4TH_CONV',
        f'{side}_TD_SUM': f'{side}_TD',
        f'{side}_PENALTY_SUM': f'{side}_PENALTY',
        f'{side}_PENALTY_YDS_SUM': f'{side}_PENALTY_YARDS',
        f'{side}_RUSH_YDS_SUM': f'{side}_RUSH_YARDS',
        f'{side}_RUSH_PLAYS_SUM': f'{side}_RUSH_ATTEMPT',
        f'{side}_PASS_YDS_SUM': f'{side}_PASS_YARDS',
        # The original assigned AWAY_PASS_PLAYS_SUM twice, the second time from
        # HOME_PASS_ATTEMPT; corrected so each side uses its own pass attempts.
        f'{side}_PASS_PLAYS_SUM': f'{side}_PASS_ATTEMPT',
        f'{side}_BIG_10_SUM': f'{side}_BIG_10',
        f'{side}_BIG_20_SUM': f'{side}_BIG_20',
        f'{side}_BIG_30_SUM': f'{side}_BIG_30',
        f'{side}_BIG_50_SUM': f'{side}_BIG_50',
        f'{side}_NEG_00_SUM': f'{side}_NEG_00',
        f'{side}_NEG_05_SUM': f'{side}_NEG_05',
        f'{side}_NEG_10_SUM': f'{side}_NEG_10',
        f'{side}_NEG_15_SUM': f'{side}_NEG_15',
        f'{side}_FUMBLES_SUM': f'{side}_FUMBLES_LOST',
        f'{side}_INTERCEPTION_SUM': f'{side}_INTERCEPTION',
    })

for target, source in cum_map.items():
    q[target] = q.groupby('game_id')[source].cumsum()

q['HOME_TURNOVER_SUM'] = q['HOME_INTERCEPTION_SUM'] + q['HOME_FUMBLES_SUM']
q['AWAY_TURNOVER_SUM'] = q['AWAY_INTERCEPTION_SUM'] + q['AWAY_FUMBLES_SUM']

# Share of the game's plays run by the home offense.
# (The original divided AWAY by AWAY + AWAY; corrected to HOME / (HOME + AWAY).)
q['HOME_OFFENSE_PCT'] = q['HOME_OFFENSE_SUM'] / (q['HOME_OFFENSE_SUM'] + q['AWAY_OFFENSE_SUM'])

q['HOME_RUSH_YDS_PLAY'] = q['HOME_RUSH_YDS_SUM'] / q['HOME_RUSH_PLAYS_SUM']
q['AWAY_RUSH_YDS_PLAY'] = q['AWAY_RUSH_YDS_SUM'] / q['AWAY_RUSH_PLAYS_SUM']
q['HOME_PASS_YDS_PLAY'] = q['HOME_PASS_YDS_SUM'] / q['HOME_PASS_PLAYS_SUM']
q['AWAY_PASS_YDS_PLAY'] = q['AWAY_PASS_YDS_SUM'] / q['AWAY_PASS_PLAYS_SUM']
Develop variables for angular momentum. Aim to pinpoint scenarios where the latest statistics diverge from established game averages or patterns.
# Ratio of each recent-window statistic to its whole-game counterpart. Values far
# from 1 flag moments where recent play diverges from the game's established pattern.
base_stats = ['NET_YARDS_SUM', 'OFFENSE_YARDS_SUM', 'OFFENSE_SUM',
              'BIG_10_SUM', 'BIG_20_SUM', 'BIG_30_SUM', 'BIG_50_SUM',
              'NEG_00_SUM', 'NEG_05_SUM', 'NEG_10_SUM', 'NEG_15_SUM',
              'FUMBLES_SUM', 'INTERCEPTION_SUM', 'TURNOVER_SUM', 'TD_SUM',
              'PENALTY_SUM', 'PENALTY_YDS_SUM',
              'RUSH_YDS_SUM', 'RUSH_PLAYS_SUM', 'RUSH_YDS_PLAY',
              'PASS_YDS_SUM', 'PASS_PLAYS_SUM', 'PASS_YDS_PLAY',
              '4TH_FAIL']
ratio_bases = [f'{side}_{stat}' for side in ('HOME', 'AWAY') for stat in base_stats]
# HOME_OFFENSE_PCT has no AWAY counterpart; the original computed a D_ ratio for
# HOME_4TH_CONV but not AWAY_4TH_CONV, and that asymmetry is preserved here.
ratio_bases += ['HOME_OFFENSE_PCT', 'HOME_4TH_CONV']

q = q.copy()
for window in [5, 10, 15, 30]:
    print(window)
    dino = '_' + str(window)
    for base in ratio_bases:
        q['D_' + base + dino] = q[base + dino] / q[base]
    q = q.copy()
q = q.copy()
q = q.sort_values(by=['season', 'game_date', 'game_id', 'play_id', 'game_seconds_remaining'],
                  ascending=[True, True, True, True, False])
Identify and retain all variables that are known prior to the snap on a fourth down play, as well as the dependent variable. This aspect is crucial. We cannot utilize information obtained after the fourth down action to forecast conversion rates.
greatgazoo = q[['wilma', 'play_id', 'game_id', 'home_team', 'game_seconds_remaining', 'away_team',
                'season_type', 'week', 'side_of_field', 'yardline_100', 'game_date', 'posteam',
                'game_half', 'drive', 'qtr', 'down', 'time', 'yrdln', 'ydstogo', 'play_type',
                'home_timeouts_remaining', 'away_timeouts_remaining', 'season', 'series',
                'fourth_down_failed', 'fourth_down_converted']]
Create a second dataframe by dropping the reference fields and the dependent variable, keeping the engineered momentum features that will be lagged.
mr_slate = q.drop(['play_id', 'game_id', 'home_team', 'game_seconds_remaining', 'away_team',
                   'season_type', 'week', 'side_of_field', 'yardline_100', 'game_date', 'posteam',
                   'game_half', 'drive', 'qtr', 'down', 'time', 'yrdln', 'ydstogo', 'play_type',
                   'home_timeouts_remaining', 'away_timeouts_remaining', 'season', 'series',
                   'rush_attempt', 'pass_attempt', 'fourth_down_failed', 'fourth_down_converted'],
                  axis=1)
Create a new variable that preserves the original play index (wilma) within the dataframe containing the outcomes of the 4th down play.
mr_slate['old_wilma']=mr_slate['wilma']
Increment the index by one so that each play is matched with the feature values computed through the previous play. Any variable whose value is only known after the snap is thereby shifted forward, and a fourth down play inherits the game state as it stood before the ball was snapped.
mr_slate['wilma']=mr_slate['wilma']+1
Merge with the original dataframe. This ensures that only values known before the ball is snapped are used to predict 4th down conversion.
z = pd.merge(greatgazoo,mr_slate, how="inner", on=["wilma"])
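To see what the shift accomplishes, consider a toy example (illustrative only, not part of the pipeline). Suppose play 3 of a game is a fourth down; bumping the feature frame's index by one means the merge hands play 3 the momentum values computed through play 2:

outcomes = pd.DataFrame({'wilma': [1, 2, 3]})                             # stands in for greatgazoo
feats = pd.DataFrame({'wilma': [1, 2, 3], 'MOMENTUM': [5.0, 7.0, 9.0]})  # stands in for mr_slate
feats['wilma'] = feats['wilma'] + 1                                      # shift forward one play
print(pd.merge(outcomes, feats, how='inner', on='wilma'))
# play 3 now carries MOMENTUM = 7.0, the value through play 2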
Select only 4th down plays.
z = z[(z['fourth_down_converted'] == 1) | (z['fourth_down_failed'] == 1)]
z.shape
The dataset contains a total of 10,609 attempts on fourth down. In addition, we have compiled 551 variables for analysis. Any missing values will be filled in with zeroes.
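As a quick sanity check before the fill, it can help to see how much is actually missing. This snippet is a suggested inspection step, not part of the original notebook:

missing = z.isna().sum()
print(missing[missing > 0].sort_values(ascending=False).head(10))  # columns with the most gaps
print('total missing cells:', int(missing.sum()))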
z=z.fillna(0)
Let's eliminate the conversions of fourth downs that occur due to penalties. Following this adjustment, we will have a total of 10,459 entries in our dataset.
z = z[z['penalty_yards'] == 0]
z.shape
Let's set aside the question of whether a play is a run or a pass. It's important to recognize that since the type of play is determined before the ball is snapped, we can utilize these variables to assess the likelihood of success for both running and passing plays. We’ll delve into that analysis later, so for now, let's omit these categories from our dataset.
z=z.drop(['rush_attempt', 'pass_attempt'], axis=1)
Select the plays where the home team is on offense.
home=z[z['home_team']==z['posteam']]
Recode from the perspective of the home team: columns labeled HOME become US, and columns labeled AWAY become THEM.
home.columns = [col.replace('HOME', 'US') for col in home.columns]
home.columns = [col.replace('AWAY', 'THEM') for col in home.columns]
home.columns = [col.replace('home', 'US') for col in home.columns]
home.columns = [col.replace('away', 'THEM') for col in home.columns]
Select the plays where the away team is on offense.
away = z[z['home_team'] != z['posteam']]
away.head()
away.columns = [col.replace('AWAY', 'US') for col in away.columns]
away.columns = [col.replace('HOME', 'THEM') for col in away.columns]
away.columns = [col.replace('away', 'US') for col in away.columns]
away.columns = [col.replace('home', 'THEM') for col in away.columns]
Combine the two data sets. In this setup, the team that has possession of the ball will be represented by their statistics as "US," while the opposing team, which does not have possession, will be denoted as "THEM" for every fourth down play.
y = pd.concat([home, away])
y = y.sort_values(by=['season', 'game_date', 'game_id', 'play_id', 'game_seconds_remaining'],
                  ascending=[True, True, True, True, False])
y = y.reset_index()
y = y.rename(index=str, columns={"index": "BETTY"})
y['BETTY'] = y['BETTY'].astype(int)
y['BETTY'] = y['BETTY'] + 1
y=y.drop([ 'wilma','old_wilma'], axis=1)
Remove the raw per-play building blocks that were used to construct the momentum features; they mattered historically in the momentum calculations but are not themselves useful predictors in the model.
df = y.drop(['US_PENALTY', 'THEM_PENALTY', 'US_OFFENSE', 'THEM_OFFENSE', 'THEM_RUSH_YARDS',
             'THEM_PASS_YARDS', 'THEM_OFFENSE_YARDS', 'THEM_PENALTY_YARDS', 'THEM_BIG_10',
             'THEM_BIG_20', 'THEM_NEG_00', 'THEM_NEG_05', 'THEM_NEG_10', 'THEM_RUSH_ATTEMPT',
             'THEM_PASS_ATTEMPT', 'D_THEM_PENALTY_YDS_SUM_5', 'D_THEM_PENALTY_YDS_SUM_10',
             'D_THEM_PENALTY_YDS_SUM_15', 'D_THEM_PENALTY_YDS_SUM_30', 'NEGATIVE_YARDS'],
            axis=1)
df['FOURTH_CONV'] = np.where(df.fourth_down_converted, 1, 0)
df['FOURTH_CONV'].value_counts()
Interestingly, the historical conversion rate on fourth down plays stands at approximately 50%.
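That base rate can be confirmed directly from the flag we just created; a one-line check against the dataframe built above:

print('Historical 4th down conversion rate: %.1f%%' % (100 * df['FOURTH_CONV'].mean()))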
To enhance the analysis, create a feature that identifies whether the play took place in the first or second half of the game.
df['first_half']= np.where((df.game_half=='Half2'), 0, 1)
Establish a training group that includes the seasons from 1999 through 2021, and a testing group containing fourth-down plays from 2022 and 2023. The model will be built on the training seasons and then evaluated against the two held-out seasons.
df['MODELING_GROUP'] = np.where(df.season.isin([2022, 2023]), 'TESTING', 'TRAINING')
Count the number of records within each Modeling Group.
df['MODELING_GROUP'].value_counts()
Drop the remaining reference fields; they identify the play but should not enter the model.
df = df.drop(columns=['US_team', 'THEM_team', 'season_type', 'side_of_field', 'game_date',
                      'game_half', 'time', 'yrdln', 'play_type', 'penalty_team', 'wilma_min',
                      'season', 'week', 'fourth_down_failed', 'fourth_down_converted'])
Create a comprehensive list that includes all field names.
x=(df.columns.tolist())
Replace infinite values (produced by divide-by-zero in the ratio features) with a sentinel value.
df.replace([np.inf, -np.inf], -1010101, inplace=True)
Establish a pair of separate data frames; one will be used for testing, while the other is meant for training.
df_training = df[df['MODELING_GROUP'] == 'TRAINING']
df_testing = df[df['MODELING_GROUP'] == 'TESTING']
Define the features, dependent variable and independent variables.
features = [x for x in df_training.columns if x not in ['FOURTH_CONV', 'MODELING_GROUP', 'BETTY', 'posteam', 'game_id']]
dependent = pd.DataFrame(df_training['FOURTH_CONV'])
independent = df_training.drop(columns=['FOURTH_CONV', 'MODELING_GROUP', 'BETTY', 'posteam', 'game_id'])
estimator_vals = 215
lr_vals = .05
md_vals = 10
mcw_vals = 5
gamma_vals = .1
mds_vals = 3
First, we will construct the model using the training data and then validate it on the testing data.
xgb0 = XGBClassifier(objective='binary:logistic',
                     learning_rate=lr_vals,
                     n_estimators=estimator_vals,
                     max_depth=md_vals,
                     min_child_weight=mcw_vals,
                     gamma=gamma_vals,
                     max_delta_step=mds_vals)
xgb0.fit(independent, dependent)

# Score the training data.
df_training = df_training.copy()
df_training['P_FOURTH_CONV'] = xgb0.predict_proba(df_training[features])[:, 1]
df_training['Y_FOURTH_CONV'] = np.where(df_training.P_FOURTH_CONV <= .5, 0, 1)

print('Training Data -- 1999 through 2021')
print("Accuracy : %.4g" % metrics.accuracy_score(df_training['FOURTH_CONV'].values, df_training['Y_FOURTH_CONV']))
print("AUC Score (Train): %f" % metrics.roc_auc_score(df_training['FOURTH_CONV'], df_training['P_FOURTH_CONV']))
print("Precision Score (Train): %f" % metrics.precision_score(df_training['FOURTH_CONV'], df_training['Y_FOURTH_CONV']))

# Score the held-out testing data.
df_testing = df_testing.copy()
df_testing['P_FOURTH_CONV'] = xgb0.predict_proba(df_testing[features])[:, 1]
df_testing['Y_FOURTH_CONV'] = np.where(df_testing.P_FOURTH_CONV <= .5, 0, 1)

print('Testing Data -- 2022 and 2023')
print("Accuracy : %.4g" % metrics.accuracy_score(df_testing['FOURTH_CONV'].values, df_testing['Y_FOURTH_CONV']))
print("AUC Score (Testing): %f" % metrics.roc_auc_score(df_testing['FOURTH_CONV'], df_testing['P_FOURTH_CONV']))
print("Precision Score (Testing): %f" % metrics.precision_score(df_testing['FOURTH_CONV'], df_testing['Y_FOURTH_CONV']))
The accuracy of the testing set stands at an impressive 87.34%. Next, let's delve into a confusion matrix for further insights.
print(pd.crosstab(df_testing.FOURTH_CONV, df_testing.Y_FOURTH_CONV, dropna=False, margins=True))
If the model forecasts a non-conversion, it is incorrect approximately 14% of the time and accurate 86% of the time. When it predicts a conversion, its error rate drops to about 11%, leading to an impressive accuracy rate of 89%. Considering the complexities introduced by human behavior, these results demonstrate remarkable precision. Next, we will explore the key features that contribute to this performance.
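Those per-prediction error rates can be read off the crosstab programmatically. A small sketch (not from the original notebook) that makes the arithmetic explicit:

cm = pd.crosstab(df_testing.FOURTH_CONV, df_testing.Y_FOURTH_CONV)  # rows = actual, columns = predicted
correct_when_pred_fail = cm.loc[0, 0] / cm[0].sum()  # share of predicted non-conversions that actually failed
correct_when_pred_conv = cm.loc[1, 1] / cm[1].sum()  # share of predicted conversions that actually converted
print('Predicted non-conversion, correct: %.1f%%' % (100 * correct_when_pred_fail))
print('Predicted conversion, correct: %.1f%%' % (100 * correct_when_pred_conv))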
df_whole_model = df_testing
feature_important = xgb0.get_booster().get_score(importance_type='weight')
keys = list(feature_important.keys())
values = list(feature_important.values())
data = pd.DataFrame(data=values, index=keys, columns=["score"]).sort_values(by="score", ascending=False)
data.nlargest(30, columns="score").plot(kind='barh', figsize=(20, 10))  # plot the top 30 features
Five key factors significantly influence outcomes:
1) The duration of the current drive, which reflects momentum.
2) The current yard line, i.e., field position.
3) The total yards gained over the last ten plays, indicating momentum.
4) The remaining yards needed for a first down.
5) Overall offensive yards accumulated over the last fifteen plays, also a momentum measure.
Notably, three of these five critical variables are tied to momentum. While this model is robust, it relies on an extensive array of independent variables, over 500 in total. Is there a way to streamline this number while maintaining accuracy? Let's explore using just the top thirty most significant variables.
df_whole_model.shape
Identify the top 30 column names that hold the greatest predictive power.
x30=data.head(30).index.to_list()
Create an additional list that includes the identification fields.
keys=['FOURTH_CONV','MODELING_GROUP','BETTY','posteam','game_id']
Combine the lists.
x30=keys+x30
Select the appropriate categories from both the training and testing datasets.
df_training = df_training[x30]
df_testing = df_testing[x30]
Construct a model utilizing the streamlined selection of independent variables.
features = [x for x in df_training.columns if x not in ['FOURTH_CONV', 'MODELING_GROUP', 'BETTY', 'posteam', 'game_id']]
dependent = pd.DataFrame(df_training['FOURTH_CONV'])
independent = df_training.drop(columns=['FOURTH_CONV', 'MODELING_GROUP', 'BETTY', 'posteam', 'game_id'])

xgb0 = XGBClassifier(objective='binary:logistic',
                     learning_rate=lr_vals,
                     n_estimators=estimator_vals,
                     max_depth=md_vals,
                     min_child_weight=mcw_vals,
                     gamma=gamma_vals,
                     max_delta_step=mds_vals)
xgb0.fit(independent, dependent)

df_training = df_training.copy()
df_training['P_FOURTH_CONV'] = xgb0.predict_proba(df_training[features])[:, 1]
df_training['Y_FOURTH_CONV'] = np.where(df_training.P_FOURTH_CONV <= .5, 0, 1)

print('Training Data -- 1999 through 2021')
print("Accuracy : %.4g" % metrics.accuracy_score(df_training['FOURTH_CONV'].values, df_training['Y_FOURTH_CONV']))
print("AUC Score (Train): %f" % metrics.roc_auc_score(df_training['FOURTH_CONV'], df_training['P_FOURTH_CONV']))
print("Precision Score (Train): %f" % metrics.precision_score(df_training['FOURTH_CONV'], df_training['Y_FOURTH_CONV']))

df_testing = df_testing.copy()
df_testing['P_FOURTH_CONV'] = xgb0.predict_proba(df_testing[features])[:, 1]
df_testing['Y_FOURTH_CONV'] = np.where(df_testing.P_FOURTH_CONV <= .5, 0, 1)

print('Testing Data -- 2022 and 2023')
print("Accuracy : %.4g" % metrics.accuracy_score(df_testing['FOURTH_CONV'].values, df_testing['Y_FOURTH_CONV']))
print("AUC Score (Testing): %f" % metrics.roc_auc_score(df_testing['FOURTH_CONV'], df_testing['P_FOURTH_CONV']))
print("Precision Score (Testing): %f" % metrics.precision_score(df_testing['FOURTH_CONV'], df_testing['Y_FOURTH_CONV']))
We can get similar results by only using 30 independent variables. Let's look at a specific example: the Lions' failed fourth-down attempt against the 49ers in the NFC Championship game.
df = df_testing[df_testing['BETTY'] == 615686]
df = df[['FOURTH_CONV', 'BETTY', 'game_id', 'ydstogo', 'P_FOURTH_CONV', 'Y_FOURTH_CONV']]
df
Our analysis shows that prior to the ball being snapped, the likelihood of a successful conversion was approximately 20%. This means there was an 80% chance that the Lions would fail to convert on fourth down. Now, let’s examine the teams in 2023 that took the least amount of risk by opting for first downs with the highest probability of success.
df_testing['year'] = df_testing['game_id'].str.slice(0, 4)
df_testing = df_testing[df_testing['year'] == '2023']
grouped_df = pd.DataFrame(df_testing.groupby('posteam')['P_FOURTH_CONV'].agg(['mean', 'min', 'max', 'count']).reset_index())
grouped_df = grouped_df.sort_values(by=['mean'], ascending=[False])
grouped_df.head(5)
According to the analysis, teams like Denver, Tampa Bay, Philadelphia, Minnesota, and Baltimore exhibited the least risk-taking behavior on fourth down in 2023. This suggests that the coaches of these franchises possess a solid understanding of the model's insights. They are more inclined to attempt a fourth down conversion when the odds are in their favor. In essence, these coaching staffs demonstrate an ability to make optimal decisions regarding fourth down situations. Now, let’s examine those teams that take greater risks.
grouped_df.tail(5)
Contextual Analysis of 4th Down Conversion Decisions
Insights into the specific game situations in which these bottom-ranked teams attempted low-probability 4th down conversions would provide valuable context for understanding their decision-making process and identifying potential areas for improvement. An analysis of the frequency and success of 4th down conversions in different game scenarios (e.g., early in the game vs. late in the game, winning vs. losing, etc.) would offer a more nuanced understanding of the factors influencing conversion rates and the effectiveness of coaching decisions.

To simplify matters, we'll create a ratio by dividing each team's actual conversion rate by their average probability of conversion. A ratio of 1 means that a team converts on 4th down exactly as often as expected. A ratio greater than one implies the team converts more than expected, while a ratio less than one indicates lower-than-expected conversions.
YY = df_testing
tips_summed = pd.DataFrame(YY.groupby(['posteam'])[['P_FOURTH_CONV', 'FOURTH_CONV']].mean())
tips_summed['INDEX'] = tips_summed['FOURTH_CONV'] / tips_summed['P_FOURTH_CONV']
tips_summed = tips_summed.sort_values(by=['INDEX'], ascending=[True])
plotter = tips_summed.reset_index()
plotter
In the 2023 season, Dallas struggled the most with converting fourth downs, while Baltimore excelled in this area. The accompanying chart illustrates the correlation between the predicted probabilities of success on fourth-down attempts and their actual conversion rates. The strong linear relationship observed here suggests that our model is highly effective. Furthermore, it reveals that teams are more successful in their fourth-down conversions when they choose to take risks under favorable conditions.
y1 = plotter['FOURTH_CONV']
x1 = plotter['P_FOURTH_CONV']
labels = plotter['posteam']

trace = go.Scatter(
    text=labels,
    textposition='bottom left',
    x=x1,
    y=y1,
    mode='markers+text',
    name='Actual vs. Predicted Conversion Rate')

layout = go.Layout(
    title='Probability of 4th Down Conversion and Actual Conversion Rate on 4th Down',
    xaxis=dict(
        title='Probability to Convert on Fourth Down',
        titlefont=dict(family='Courier New, monospace', size=18, color='#7f7f7f')),
    yaxis=dict(
        title='Fourth Down Conversion Rate',
        titlefont=dict(family='Courier New, monospace', size=18, color='#7f7f7f')),
    showlegend=False)

# Compute a linear trend line.
slope, intercept = np.polyfit(x1, y1, 1)
trendline_y = slope * x1 + intercept
trendline = go.Scatter(x=x1, y=trendline_y, mode='lines', name='Trend line')

fig = go.Figure(data=[trace, trendline], layout=layout)
plotly.offline.iplot(fig, filename='lines')
Assessing Fourth-Down Conversion Strategies in the NFL
Some NFL teams, such as Jacksonville, Tennessee, and the New York Jets, often struggle with their decision-making regarding fourth-down conversions. These teams tend to be overly aggressive at times, opting to go for it in situations where the statistical odds do not favor them. In contrast, franchises like Baltimore and Philadelphia have shown a more nuanced understanding of when to attempt these crucial plays. With average fourth-down conversion rates hovering around 40%, it becomes clear that those who excel in this area possess both strategic insight and quarterbacks capable of executing high-pressure plays effectively.

Las Vegas is an intriguing case. While their decision-making on when to take a risk on fourth down could use some improvement, they excel at converting those opportunities. The model presented in this notebook can be easily implemented and utilized in real time. There is no justification for any coach to make poor choices regarding fourth downs. The essential factor lies in grasping the game's momentum. If you understand this dynamic, determining the right moment to "go for it" on fourth down becomes clear: it is simply about playing the odds. I hope you found this information useful. Thank you for your attention.
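To make the real-time claim concrete, here is a minimal sketch of how the trained model could be scored during a live game. It assumes the xgb0 model and features list from the training steps above and a reasonably recent xgboost version; the values in live_play are hypothetical placeholders that would, in practice, be computed from the live play-by-play feed with the same feature-engineering code used for training:

# Persist the trained model once, after training.
xgb0.save_model('fourth_down_model.json')

# On game day, reload the model and score the current 4th down situation.
live_model = XGBClassifier()
live_model.load_model('fourth_down_model.json')

# Hypothetical pre-snap feature row; every momentum feature defaults to 0 here
# purely for illustration.
live_play = pd.DataFrame([{f: 0 for f in features}])
live_play['ydstogo'] = 3          # 4th-and-3
live_play['yardline_100'] = 30    # ball on the opponent's 30-yard line

p_convert = live_model.predict_proba(live_play[features])[:, 1][0]
print('Probability of converting: %.1f%%' % (100 * p_convert))

Wired to a live data feed, the same pipeline that built the training features could refresh this probability after every play.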
References
- Complete Guide to Sports Data Analytics. KINEXON Sports.
- Sports Analytics: How Different Sports Use Data Analytics. DataCamp.
- Why Sports Analytics is Essential for Victory Today. Express Analytics.
- Sports Performance Analytics Specialization [5 courses] (UMich). Coursera.
- Sports analytics. Wikipedia.
- MSc Sport Data Analytics - Courses. University of Strathclyde.
- All You Need to Know About Sport Analytics in 2024. Analytics Vidhya.
- What Is Sports Analytics? (Definition, Importance, and Tips). Indeed.