Showing posts with label Moneyball. Show all posts

Tuesday, December 20, 2022

MLB Wins over Expected Wins pre- and post-adoption of Moneyball by Charles Slavik



https://public.tableau.com/views/MLB_WINS/Sheet1?:language=en-US&publish=yes&:toolbar=n&:display_count=n&:origin=viz_share_link



Thursday, June 07, 2018

Giants Hitting Stats at a glance



Longoria's OPS is .738, barely above Panik's at .729. I guess I never realized what a windmill he was. He swings at everything, anything close. McCutchen, at .761, is close. He doesn't expand the strike zone that much, if at all. The umpires do a pretty good job of expanding it for Andrew. 

Longoria reportedly got a bit touchy when a reporter brought up how his OBA is almost equal to his AVG, ie: he doesn't walk much, and he said "Who cares about walks?" I guess he missed the whole Moneyball, Brad Pitt bro-mance that baseball has had with OBA and walks recently. Or perhaps he's more of a "launch angle" guy. 

Austin Jackson OPS @ 0.622 Yuck!!

from thescore.com (app link)
https://thescore.app.link/ntEzyCn6pA

Wednesday, January 31, 2018

Take Me Out to the Brain Game - SBNation.com


The go-to phrase from Moneyball. Dykstra was going to "stick" future Hall of Famer Steve Carlton in a spring training game, when all the other rookies had stars in their eyes. Classic.

Take Me Out to the Brain Game - SBNation.com:
“Lenny didn’t let his mind screw him up. The physical gifts required to play pro ball were, in some ways, less extraordinary than the mental ones. Only a psychological freak could approach a 100-mph fastball aimed not far from his head with total confidence.”
- Michael Lewis, Moneyball

Saturday, December 16, 2017

The Moneyball Mistake - The Danger of Managing to the Metric


It's a long way to go, but well worth the trip. - CS

http://news.wustl.edu/news/Pages/27028.aspx

Incentive gaming is when people manipulate pay-for-performance schemes in ways that increase their compensation without benefitting the party that pays.


“It’s an example of how innovative people can be when there are financial rewards involved,” Pierce said.
What top officials at the VA may have overlooked, however, was testing their system before implementing it.
“Designers of incentive-based compensation systems must think carefully about unintended consequences, putting themselves in the shoes of their employees and asking, ‘If I were given these incentives, what might I do to game them?’ ”
“Managers and policymakers need to understand that humans are clever and often opportunistic,” Pierce said. “If you give them an incentive system, many of them will figure out how to manipulate it to maximize pay and minimize effort.”
Pierce noted that the allegations involving the VA, as in frequent cases of standardized test gaming by teachers, are particularly striking for two reasons.
First, it shows the difficulty of inserting financial incentives into a setting where they are not traditionally used — the federal government.
“Many people erroneously see financial incentives as a panacea for perceived examples of government inefficiency such as education, health care or procurement,” he said.
Second, it shows that financial incentives can overwhelm even values that often are represented as routine.
“If even small financial incentives can overwhelm strong societal values such as educating children and caring for those who served our country, then such problems can occur anywhere,” Pierce said. 


The Dangers of Managing to Metrics | BOMGAR

Strategy Topics


Support centers run their business and make critical decisions based on metrics.  However, oftentimes the metrics do not give leadership a true representation of the state of the business because they are both unbalanced and self-reported (biased).
Whether or not you have kids in school, there's a good chance you've heard plenty of arguments against the practice of "teaching to the test," where teachers are forced to focus on preparing students for standardized tests, rather than teaching a well-rounded curriculum for the grade level and subject matter.  Higher scores on the standardized tests lead to better teacher ratings and school rankings, but do not necessarily promote the entire education experience required for the student to be successful at the next level.  Imagine, if you would, a football coach who only practices punt returns, or a basketball coach only focused on free throws.  The team would be great at these single tasks, but would they ever be able to actually win a game? 
It may not be seen as "teaching to the test," but the same practice occurs in many support centers around the globe when management does not use a balanced scorecard. Instead, agents are encouraged to focus on just one or two metrics, such as Average Handle Time (AHT), Customer Satisfaction (CSAT), Quality, or Issue Resolution (IR).  The agents quickly learn to do well based on the metrics they are being measured against and neglect the rest of what is important to both the business and the customer.  Below are a few examples I've seen.
  • Focus on AHT - A particular computer tech support call center I worked with had dissatisfied customers because of long wait times to reach an agent.  Management realized that agent call handle times were extremely long, so they shifted focus from quality to AHT.  The agents were given shorter talk time targets, which were reported weekly.  The agents knew their performance evaluations would be based on their reported AHT, so their solution was to "fix" the majority of issues by telling the customers to reinstall the operating system and call back if they needed help after the reinstall was complete.  Most of these issues, it turned out, did not need an OS reinstall. This caused a lot of unnecessary churn for the customers (not to mention data loss), simply because the agents did not want to take the necessary time to troubleshoot the true issue.  In the end, handle times were great - but to the detriment of repeat calls, issue resolution and customer satisfaction - costing the company more money than they were saving by shortening handle times.
  • Focus on CSAT - Customer loyalty is very important to the long term success of any business, as dissatisfied customers will not only share their negative experience with others, but also not be repeat customers themselves.  However, singular focus on CSAT can easily drive up costs unnecessarily.  One customer service call center was seeing an increase in negative customer satisfaction surveys.  In response, the primary goal became raising CSAT scores.  As a result, agents began abusing concessions available to them to pacify customers.  They weren't resolving the issues or reason for complaint at a higher rate. They were simply giving customers more free products and coupons for future purchases, while specifically telling them to be sure to give favorable scores on the email survey they would receive.  Not only did this behavior cause higher costs in concessions, but it also drove up repeat calls once the customers realized their issues were not really resolved over time.
Equally as dangerous in managing to metrics is the practice of self-reported metrics, which is the equivalent of the fox guarding the hen house.  Businesses need to have methods for ensuring efficiency and effectiveness, but it is possible for departments or groups to "game the system" in order to ensure good scores for rankings, ratings and even compensation.  Below are two examples of the negative impact of self-reported metrics.
  • Quality Monitoring - Team managers in support centers are rated by the quality of their agents.  Some support centers have a quality team, but many times the team managers are responsible for evaluating and coaching their own agents.  Low scores not only cause extra work for the manager by forcing them to coach the poor performers but also force managers to report the poor quality of their own team.  This can lead to falsely elevated quality scores, which can ultimately drive lower CSAT and issue resolution, as well as higher handle times and repeat calls when undesirable agent behaviors are not corrected or improved.  If agents are given falsely high quality scores, they will continue performing at the same level.  It is important to ensure that quality and customer satisfaction scores are correlated by coaching and managing to appropriate levels.
  • Issue Resolution - Oftentimes, IR is measured by agents reporting whether or not they believe the issue was resolved - not whether the issue was actually resolved from the customer's perspective.  I've seen many instances of agents claiming that the issue was resolved when it was out of their scope of support and they were unable to resolve it, arguing that they should not be scored against something out of their control.  Again, the issue was not resolved, and in the end this will drive other metrics out of balance.
Design of a balanced scorecard ultimately is about identifying a small number of financial and non-financial measures and attaching targets to them.  This should include the measures and targets for both inputs (i.e. contact volume, staffing levels) and outputs (service levels, abandon rates, issue resolution).  Focus on balanced metrics can help a business quickly see where further analysis and improvements are needed.
In order to collect unbiased feedback, care should be taken to ensure that everyone in the population being surveyed has an equal chance of being selected.  For example, only surveying customers whose issues have been resolved eliminates feedback from customers with unresolved issues, missing opportunities for improvement.  Also, when monitoring agent quality, sampling should occur at different times of the day.  Agent behaviors can vary throughout the day, so using random times of the day can net varying opportunities for coaching and feedback.
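That unbiased-sampling idea can be sketched in a few lines of Python. Everything here is invented for illustration: the call log, its hour tags, and the `sample_for_review` helper are hypothetical, not part of any real QA tool.

```python
import random

def sample_for_review(calls, k, seed=None):
    """Draw k calls uniformly at random, so every call, whatever hour
    of day it came in, has an equal chance of being reviewed."""
    rng = random.Random(seed)
    return rng.sample(calls, k)

# Hypothetical call log: (call_id, hour_of_day)
call_log = [(i, hour) for i, hour in
            enumerate([8, 9, 9, 11, 13, 14, 16, 18, 20, 22])]

# Review 4 calls; a fixed seed makes the draw repeatable for auditing.
for call_id, hour in sample_for_review(call_log, 4, seed=7):
    print(f"review call {call_id} (taken at {hour}:00)")
```

Because the draw is uniform over the whole log, calls from every hour of the day can surface, which is exactly the point: no agent can game the review window by behaving well only at predictable monitoring times.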
Ultimately, a balanced scorecard measured by unbiased methods is critical to ensure a true picture of how the business and individual teams are performing. Without focus on both, agents may be driven to undesirable performance and the business could be making unfavorable decisions.

GAMING INCENTIVES

September 16th, 2010
A couple of weeks ago, I listened to a very funny story about economic incentives on NPR. (Something funny on economic incentives?!)
The story was about an economics professor who decided to use incentives to shape the behavior of his children. He devised an incentive program for potty training–which his toddler gamed.
So when the time came to potty train his daughter, B., he designed what seemed like an economically rational incentive: B. would receive a jelly bean every time she went to the toilet.
Once the new policy was in place, B. suddenly had to go to the toilet really, really often.
A few years later, B.'s younger brother needed to be potty trained. And Gans decided to expand the incentive system: Every time B. helped her brother go to the bathroom, she would get a treat.
"I realized that the more that goes in, the more comes out," says B., who is now 11. "So I was just feeding my brother buckets and buckets of water."
I see lots of incentive programs like that, so obvious they could be gamed by a child. What are the people who devise these programs thinking? Sadly, when the simple incentives produce undesired results, they come up with ever more convoluted ways to elicit the desired behavior. (There's also a tendency to blame the people who gamed the incentive.  It's as if the logic is: "We tried to manipulate you but you figured it out. Now we think you are a bad person for not cooperating with our attempt to manipulate you.")
Rather than try to manipulate adults in the workplace, why not appeal to intrinsic motivation? Tell people why something you want them to do is important, how it connects to the mission and financial results of the company. Then remove disincentives and barriers to doing the right thing.
Anyone contemplating trying to shape behavior with measurement and incentives should read Austin's Measuring and Managing Performance in Organizations.  And then, consider that there might be a more congruent way to achieve the desired outcome.
- See more at: Gaming Incentives


The Problem with Popular Measures

The most useful statistics are persistent (they show that the outcome of an action at one time will be similar to the outcome of the same action at another time) and predictive (they link cause and effect, predicting the outcome being measured). Statisticians assess a measure's persistence and its predictive value by examining the coefficient of correlation: the degree of the linear relationship between variables in a pair of distributions. Put simply, if there is a strong relationship between two sets of variables (say a group of companies' sales growth in two different periods), plotting the points on a graph like the ones shown here produces a straight line. If there's no relationship between the variables, the points will appear to be randomly scattered, in this case showing that sales growth in the first period does not predict sales growth in the second.
In comparing the variable "sales growth" in two periods, the coefficient of correlation, r, falls in the range of 1.00 to –1.00. If each company's sales growth is the same in both periods (a perfect positive correlation), r = 1.00—a straight line. (The values need not be equal to produce a perfect correlation; any straight line will do.) If sales growth in the two periods is unrelated (there is zero correlation), r = 0—a random pattern. If increases in one period match decreases in the other (a perfect inverse correlation), r = –1.00—also a straight line. Even a quick glance can tell you whether there is a high correlation between the variables (the points are tightly clustered and linear) or a low correlation (they're randomly scattered).
The closer to 1.00 or –1.00 the coefficient of correlation is, the more persistent and predictive the statistic. The closer to zero, the less persistent and predictive the statistic.
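For the statistically curious, the coefficient of correlation described above can be computed in a few lines of Python. The sales-growth figures here are invented for illustration; they are not the article's dataset.

```python
import math

def pearson_r(xs, ys):
    """Coefficient of correlation between two paired samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented sales-growth rates for five companies in two periods.
period_1 = [0.05, 0.12, 0.08, 0.20, 0.03]
period_2 = [0.06, 0.10, 0.09, 0.18, 0.04]
print(round(pearson_r(period_1, period_2), 2))  # close to 1.0: persistent
```

An r near 1.00 for these made-up numbers would say the metric is persistent: a company's growth in one period looks much like its growth in the next. The real data in the article, of course, shows far weaker correlations.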
Let's examine the persistence of two popular measures: EPS growth and sales growth.
The figures above show the coefficient of correlation for EPS growth and sales growth for more than 300 large nonfinancial companies in the United States. The compounded annual growth rates from 2005 to 2007, on the horizontal axes, are compared with the rates from 2008 to 2010, on the vertical axes. If EPS and sales growth were highly persistent and, therefore, dependent on factors the company could control, the points would cluster tightly on a straight line. But in fact they're widely scattered, revealing the important role of chance or luck. The correlation is negative and relatively weak (r = –0.13) for EPS growth but somewhat higher (r = 0.28) for sales growth. This is consistent with the results of large-scale studies.
Next, we'll look at the predictive value of EPS growth and sales growth by examining the correlation of each with shareholder returns.
In the figures above, adjusted EPS growth and sales growth are on the horizontal axes. The vertical axes are the total return to shareholders for each company's stock less the total return for the S&P 500. Adjusted EPS growth shows a reasonably good correlation with increasing shareholder value (r = 0.37), so it is somewhat predictive. The problem is that forecasting earnings is difficult because, as we saw in the previous analysis, EPS growth in one period tells you little about what will happen in another. Earnings data may be moderately predictive of shareholder returns, but they are not persistent.
Using sales growth as a gauge of value creation falls short for a different reason. While sales growth is more persistent than EPS growth, it is less strongly correlated with relative total returns to shareholders (r = 0.27). In other words, sales-growth statistics may be somewhat persistent, but they're not very predictive.
Thus the two most popular measures of performance have limited value in predicting shareholder returns because neither is both persistent and predictive.


~;::::::;( )">  ¯\_( )_/¯

Wednesday, December 16, 2015

Baseball America and MLB.com revised prospect lists


Both Baseball America and MLB.com revised their prospect rankings recently and published the results. One thing in particular stands out about the Giants prospect list. It is fluid.

We cried when the G-men drafted Christian Arroyo, but now apparently he is the fair-haired boy, the successor to Duffy and Panik, nay Kelby Tomlinson!!

And now we're crying about the demise, the slide down the list, that is Kyle Crick. Once known as Matt Cain 2.0; of course, now we have an actual Matt Cain 2.0, who actually goes by the name of Matt Cain and carries with him a 2.0 version of his right elbow. You know, the one he pitches with and deposits $20M annual paychecks with.

 A lot of crying and angst looking at these lists, I will say that. But somebody has to do it.

The first group is the consensus group, the most dangerous group to be in because expectations are elevated and there is theoretically only one place to go: DOWN. In reality, the Panik, Duffy, Susac, Tomlinson glide path is preferred; there are just only so many rooms at the inn, so to speak.

The Consensus Four:
Tyler Beede, RHP
Christian Arroyo, SS
Adalberto Mejia, LHP
Clayton Blackburn, RHP

Of this group, Blackburn is not that highly regarded by most, he just gets guys out and makes the stat guys take notice and the "jeans sellers" (Moneyball reference) throw up in their mouths.

Mejia screams trade-bait to me. Arroyo and Beede would be keepers, but if Car-go (Rockies reference) is available, I'd package two from this list right now.

The second-tier consensus:
These guys make a lot of lists but have a lot of ??'s attached to them

Kyle Crick, RHP
Steven Okert, LHP
Aramis Garcia, C
Ty Blach, LHP

Crick and Okert both have great stuff; they just don't know where it's going. That gets coaches fired. Okert compounds it with an ever-growing medical file. He is left-handed though, so it pays to be patient. Blach is 25, so it's show or go time. He has to show what he has or go somewhere else. He is left-handed though so.....see above.  

The 25'ers:
Twenty-five years old, still playing in the minors, do I have to spell it out? Show time or go time.

Mac Williamson, OF
Chris Stratton, RHP
Derek Law, RHP

Mac Williamson has five-tool talent written all over him and looks great getting off the bus. Which begs the question.....Why is he not in SF? Better find out what he has this year. Same with Jarrett Parker, who's older BTW.

Stratton really hasn't shown much since they drafted him high; Derek Law has probably shown more, but is battling his medical file as well.


The Young'uns:
Everybody loves these guys. The prospect-sphere sure likes 'em young. That even sounded dirty writing it, but it's a known fact.

Phil Bickford, RHP
Lucius Fox, SS
Sam Coonrod, RHP
Chris Shaw, 1B
Jalen Miller, SS
Andrew Suarez, LHP
Mac Marshall, LHP

Some of these guys, like Coonrod, aren't necessarily young, just not seasoned professionals. Coonrod lights up the stats. He may be a tweener as far as starter or reliever goes, but the stuff is pretty compelling at this point. Good mix of pitching and hitting. A couple of these guys could fill out a prospect-laden package for a veteran LF'er. The Giants loaded up the cart last year, this year they are already down the first-rounder for signing Samardzija. A trade empties the cart a little bit, but for the right guy......who knows?

Anything is possible and as always, FLUID.

At this point, as far as the statistical indicators go, the only prospects with more than a small-sample size that I would put my money on would be Blackburn, Coonrod and Derek Law (with a clean bill of health) on the pitching side. Crick, Stratton and maybe Okert have to turn things around quickly.

On the hitting side, I think Williamson will hit given the opportunity and a clean bill of health. Arroyo is getting there, I am anxious to see how he handles AA for an extended amount of AB's. Slater, Cole and Dylan Davis (college bats) need to start advancing and handling AA-AAA pitching in the next year or so, but could still be dark-horses in the Duffy/Tomlinson mold.

It's also a pivotal year for former HS SS Ryder Jones lest he go the route of Rafael Rodriguez and Chuckie Jones, ie: the "whatever became of" route. Stephen Duggar (OF from Clemson) has to prove he is not Gary Brown 2.0, ie: a toolsy OF who doesn't produce up to the potential scouts place on his athletic gifts.

Giants love them some toolsy OF; however, their prospect graveyard is littered with many that have an epitaph that begins with "Whatever became of....?" That needs to change soon. We need OF versions of Matt Duffy and Kelby Tomlinson.

Speaking of which, whatever happened to Daniel Carbonell?

Sunday, October 25, 2015

Tweet by Simon Nainby on Twitter


Ouch!! That hurts. So more doesn't always equal better? Like you can have plenty of knowledge, yet not enough wisdom, which is the application of knowledge?

Preaching to the choir. This exemplifies the problem of relying blindly, or too much, on a statistical model without paying equal attention to how you are going to use those statistics, that knowledge.

Simon Nainby (@SiNainby)
"More data such as paying attention to eye colors of people when crossing a street can make you miss the big truck." pic.twitter.com/MtVSByxXUT

The Dodgers may have lost a game, a series, a shot at a World Series berth and a manager all because a SS didn't know he had to cover 3B when the 3B was shifted into RF. You can say that is not a failure of the statistical model all you want, but in a reach to maybe shave .005 or .010 points off the opponents' average, to say nothing of being slavish adherents to a "follow the cool trend or get left behind" mentality, the Dodgers came up big losers. And that is with having a big checkbook to back up the Moneyball approach, which seems like an oxymoron. But that's modern-day baseball.

Let's see if the Mets and the Matt Harvey saga come back to bite them on the butt when they least expect it, for similar underlying reasons.

Oh yeah, and the A's finished in last place.




Wednesday, July 08, 2015

Moneyball 2.0? I hope not, I'm still not over MB 1.0



Key point from the article:
The caveat with QoP is that while the numbers it generates are objective, there was subjectivity involved in its development. The weights assigned to the various factors were honed over a number of years, but there’s a central question as to whether a cutting fastball can truly be compared to a sharp curve or a well-placed change up. (The system also doesn’t allow for using one pitch to set up another, which is a key part of the art.)
This goes to the heart of why Moneyball and the whole stats vs. scouts debate continues to reverberate through every dugout and back office in baseball. It is a power struggle over whose data to use: the "subjectivity" of scouting versus the "objectivity" of statistics. I think we can see that the lines are not so much black and white as they are shades of grey, regardless of which direction you go.

from the National Post:
Moneyball 2.0? New pitching stat — courtesy of a couple of guys from Edmonton — could help identify hidden talent | National Post:

Moneyball 2.0? New pitching stat — courtesy of a couple of guys from Edmonton — could help identify hidden talent

Scott Stinson | May 29, 2015, 2:20 PM ET

Quality of Pitch

· Aims to put a numerical value on each pitch, regardless of type, on a scale of -10 to 10
· Pitches are graded according to velocity, location, amount of break, time of break, and rise out of the pitcher’s hand
· The vast majority of pitches fall in the 3-6 range. Over several years of data, only a handful of pitches with a 10 rating have been observed
· Because many pitches in an at-bat are average or worse, most pitchers will have an average quality of pitch (QoPa) of less than 5. The best team, by QoPa, in 2014 was Miami at 4.75. Toronto was near the bottom at 4.43
· At its best it identifies players who are throwing good pitches but not getting the deserved result. Michael Wacha of the Cardinals had an excellent QoPa — in MLB’s top ten — in 2014 and had a good ERA of 3.20. This season, with similar-quality pitches, his ERA has dropped below 2.00.

Scott Stinson, Postmedia News

TORONTO — It has been 12 years since Moneyball was published, and 13 years since the first playoff appearances of the Oakland A’s team that it documented. That is to say, on-base percentage isn’t sneaking up on anyone any longer.

The things that Billy Beane championed with the A’s — the value of OBP and slugging percentage when evaluating prospects, and a decreased reliance on traditional indicators such as speed and contact — have long since been accepted by enough people in the game that the original Moneyball conceit has largely been neutralized. That development poses a challenge for teams trying to find a statistical edge to complement their scouting: The central tenet of the Beane way of thinking, identifying the market inefficiency and then exploiting it, demands that there is still something left to exploit.

A couple of guys from Edmonton think they have just the thing: pitch quantification. Here is Wayne Greiner, chief salesman for the metric they call Quality of Pitch, or QoP, with the bold statement: “We think QoP is eventually going to carry more weight than ERA.”

As I say: Bold. The statistic has its roots in the college baseball career of Jarvis Greiner, Wayne’s son, who pitched at Biola University in Southern California before an injury put an end to that. Working with one of his professors at Biola, Greiner set out to try to grade the quality of a pitch in a way that had never been done before. We know that a fastball that travels 96 miles per hour is better than one that travels 87 mph, and one that paints the corner of the plate is better than one that crosses its middle. The vast amount of data now provided by Major League Baseball’s PITCHf/x system can say how much a curveball breaks and a sinker sinks, and when it breaks. Quality of Pitch attempts to take all of that information and boil it down to a single number that says whether a pitch was good or bad. A perfect pitch rates a 10. Anything above 5.0 is considered above average. And the allure of that single, simple number is that it can be assessed on any pitch: fastball, curveball, slider, changeup. Every pitch is graded on five factors: velocity, location, amount of break, point of break, and rise out of the pitcher’s hand. A pitch that breaks late and is on the edge of the strike zone will score better than one with little movement and that misses the plate entirely, and other such things you can probably figure out for yourself.

The practical application for the metric — the market inefficiency that it could potentially exploit — is that it’s a more pure assessment of the things a pitcher can control while stripping out the things he cannot. Greiner explains the concept this way: a pitcher is consistently throwing well, but a batter manages to fight off an inside pitch and bloops a single. The next guy is walked on a borderline call. The pitcher is unfazed and starts the next at-bat with more quality pitches, but then he hangs a curve ball that is turned into a three-run homer. This is bad for all of his normal statistics, including earned-run average, but QoP would say it was actually a pretty good stretch. Conversely, a pitcher who is not making quality pitches but is bailed out by a handful of great defensive plays behind him would have his mediocre outing reflected in the QoP numbers, if not the traditional statistics.

Related

The tantalizing prospect of QoP is its potential ability to tell teams which pitchers are consistently throwing better than their top-line numbers indicate. Like the guys who were quietly posting high on-base percentages a decade ago, that is the hidden value that QoP could unlock. Greiner says the numbers from 2014 predicted that, for example, Minnesota’s Kyle Gibson pitched better than his numbers indicated. He was in the top ten with an average QoP of 5.46, but had a middling ERA of 4.47. This season, again throwing quality pitches, his ERA is 2.72. (One of the things about developing algorithms over a number of years is that the makers don’t want to share all the data just yet. Since Quality of Pitch was first presented at a sabermetrics conference two months ago, nine Major League teams have taken an interest in the data. “I think we can get all 30,” Greiner says.)

The information would also be of use to teams trying to assess their own pitchers, like an early-warning signal for when someone’s curveball, for example, suddenly becomes flat. Greiner even says that the numbers can foretell arm trouble: if a pitcher’s QoP metrics suddenly go squirrely over a number of appearances, there’s a good chance that he’s not feeling right.

The caveat with QoP is that while the numbers it generates are objective, there was subjectivity involved in its development. The weights assigned to the various factors were honed over a number of years, but there’s a central question as to whether a cutting fastball can truly be compared to a sharp curve or a well-placed change up. (The system also doesn’t allow for using one pitch to set up another, which is a key part of the art.)

But Greiner believes the models have been tuned enough to now generate reliable data, pitch after pitch. It’s not Moneyball 2.0 yet, but it might get there.
Postmedia News

Wednesday, July 01, 2015

SABR Geeks, Stats and Playing to the Metric

Lucroyframe

At times, SABR guys do act like they invented the skill of catcher framing (and other aspects of baseball) because they can now somehow quantify it or illustrate it via charts, graphs or some other whiz-bang technology.  

Is there a baseball coach in America who doesn't think framing is important? 10-year-olds are framing, FCOL!! Bad coaches are the only ones, it seems, who do not understand its importance, and unfortunately, just like the poor, bad coaches will always be among us.  

Pitchers do, catchers do, umpires do, pitching coaches do, even hitters do, and have for a long time, long before sabermetrics and data analysis were a gleam in the eye of some wanna-be GM. It's SABR arrogance and self-indulgence at its worst, as the column below titled Sabermetrics Suck: I am not a Troll humorously illustrates. 

from SABR:
http://sabr.org/latest/lindbergh-brandon-mccarthy-value-catcher-framing
From SABR member Ben Lindbergh at Baseball Prospectus on May 20, 2013:
Diamondbacks starter Brandon McCarthy is known as one of baseball’s most thoughtful, analytical pitchers; two years ago, he famously embraced advanced statistics and remade himself as a pitcher by perfecting a two-seamer that helped him get groundballs more often. As a result, he’s pretty popular on the internet. I asked him to provide the pitcher’s perspective on the importance of pitch framing and receiving skills.
On how he likes to see a catcher receive his pitches: “You keep the ball where you’re throwing, but it just feels soft. Like you’re just throwing to something that just—as a pitcher, you can see movement, see stabbing, the head is moving a lot, there’s a lot of movement. You know that the umpire can see that. And if the umpire is reacting to that, then you’re probably losing pitches. There isn’t much of that with [Miguel Montero], it’s soft and it’s kind of comfortable receiving as opposed to some catchers it looks like they’re—not scared of the ball, but they’re just very anxious to go get it. And it seems like with them you see more pitches being taken away from them.”
On what a good receiver is worth: "I don’t want to put a concrete number on it, because that’s what people take away from it, and you can kind of become married to that. But I would say it’s pretty worthwhile. I mean, the difference between being in a 1-1 count and a 1-2 count is big. Sometimes you might have two of those situations in a game or three, and sometimes you might have 10 or 11, and if he’s doing something for you that’s earning calls that you might not usually get… You know, it’s hard to say because it’s not really an easy situation, you don’t know if somebody else would have gotten that call, or if it’s the umpire, or if it’s him, but I would say over the course of a season it’s probably worth a lot more than most people would consider.”
from Baseball Prospectus:
http://www.baseballprospectus.com/article.php?articleid=18896

Later that day, Rays manager Joe Maddon went on 620 WDAE-AM in Tampa with co-hosts Ron Diaz and Ian Beckles, and he and Beckles had this exchange:
Beckles: Hey Joe, a lot of the moves you make throughout the season are going to be questioned, and it doesn’t matter to you—most of them work out. The one, I guess, move that gets questioned more than any others is Jose Molina, as much as he played this year. Explain to us what Jose Molina has, or what he offers, that either [Chris] Gimenez or [Jose] Lobaton doesn’t offer.
Maddon: Well, I could reveal to you a stat that I just got today that I think would really blow some people’s minds up. I don’t know exactly how it’s calculated or formulated, but it was concluded that he saved us 50 runs this year. And that’s highly significant. You could break down—you know, people just notice once well, maybe he does not block a baseball. I agree with that, although when he has to, he has blocked the ball well. Early in the season, he was not throwing well, but by the end of the year, he was one of the best throwers in the American League. Also by the end of the year, he started hitting the ball and impacting it a lot better. But we did not—whatever we get from his bat was always going to be a bonus. It was primarily based on defense. So if you get a catcher that’s saving you 50 runs on an annual basis, that is highly significant. So, again, without—I don’t have all the information in front of me, but that’s a highly significant number. So, at the end of the day, people are going to look at the superficial part of all this, but we can’t do that. We do have to look under the hood, and actually, Jose was very, very prominent in our success this year.
We don’t know for sure whether Maddon was referring to Max’s calculations. The timing certainly suggests that he was, but maybe there’s another explanation; after all, October 5th was two days after the season ended, which is about when Maddon might have received the Rays’ internal end-of-season reports. Maybe Max’s numbers matched up with the Rays’ own evaluations exactly, or closely enough that they felt there was no harm in letting the stat slip when someone else had already put it out there.
Wherever Maddon's stat came from, it's impossible to pinpoint his motivations for repeating it on air. We never really know why teams say what they say. Maddon might not actually believe the 50-run rating. Maybe he just wanted to make Molina feel good, pump up his trade value, or make his pitchers more confident in their batterymate. Maybe he wanted to justify his decision to use Molina as much as he had. Maybe framing is all an illusion and the Rays just wanted to pull the wool farther over everyone else's eyes (I don't think it's that one).
But imagine what it would mean for Molina’s value if his framing really was worth 50 runs. Without factoring in blocking, throwing, or framing, Molina was worth 0.2 WARP. The defensive systems agree that Molina’s good throwing added roughly as many runs as his poor blocking subtracted, so let’s call those a wash. Add 50 runs, or five wins, to his tally, and his total rises to 5.2, which would make him the most valuable Ray and tie him with Adam Jones and Giancarlo Stanton at 12th overall. Only 15 players had at least 5.0 WARP this season, so we’re talking about Jose Molina—chunky, 37-year-old Jose Molina, who started 80 games, made less than half as much money as sub-replacement player Juan Rivera, failed to hit his weight, and made two Tampa Bay radio hosts wonder what he had that Chris Gimenez and Jose Lobaton didn’t—being one of the best 15 players in baseball.
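The runs-to-wins conversion above can be checked on the back of an envelope. This is a minimal sketch assuming the common sabermetric rule of thumb of roughly 10 runs per win (the exact runs-per-win figure varies by season and run environment):

```python
# Sanity-check the Molina arithmetic quoted above.
# Assumption: ~10 runs per win, the usual rule of thumb.
RUNS_PER_WIN = 10.0

base_warp = 0.2          # Molina's WARP before framing (per the article)
framing_runs = 50.0      # Maddon's quoted figure
framing_wins = framing_runs / RUNS_PER_WIN   # 50 runs -> 5 wins

total_warp = base_warp + framing_wins
print(total_warp)        # 5.2, matching the article's total
```

That 5.2 figure is what puts a backup catcher in the same WARP neighborhood as Adam Jones and Giancarlo Stanton, which is exactly why the 50-run claim raised eyebrows.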
It does only so much good to spew stats about Molina’s special season. This is one of those times when “show” works better than “tell,” so here’s a list of the 10 pitches farthest away from the center of the strike zone (in any direction) that were called strikes with Molina catching.*

We all kind of have to be on our guard about how we communicate with each other, it seems. If you can't communicate with someone in a language and context they can understand, the message will not be received, and however brilliant your message is, you will have lost. 

The recent Scioscia-Dipoto dust-up illustrates where this generally ends. 

Over the weekend, Dipoto, unhappy with the coaching staff's decision to rely more on "feel" than data, according to the report, expressed his frustration during a series of meetings. Dipoto's message was met with a heated rebuttal from at least one coach as well as slugger Albert Pujols, the report stated.

Scioscia, it seems, wants to use the data while avoiding the tendency to abuse the data. Players end up trying to play to the metric, the ultimate sin of Moneyball IMO. Too much data, too many ideas, too many thoughts from too many sources, and you wind up with the embodiment of the Yogi Berra quote "You can't think and hit at the same time." You can definitely think too much and end up in a position of paralysis by (over)analysis. Period! End of story. At some point, you have to set it down and JUST LET 'EM PLAY!! PLAY THE GAME, DON'T PLAY TO THE METRIC!

--

Tuesday, January 29, 2013

I Am Not a Troll

Since I've had a lot of new readers come by the site in recent days, I thought it was appropriate to re-state and clarify the intention behind this site.

I realize that by naming the site Sabermetrics Suck, it makes it appear that this blog is either an attempt to instigate, or a parody of an anti-sabermetrics traditionalist.

I assure you that it is neither.

Unfortunately, the title "Sabermetrics Are Good When Used in Moderation But Some People Take It Too Far" seemed a bit clunky.  Also, "Sabermetrics Suck" is definitely catchier.

The goal of the site is not to whine about "geeks with calculators sitting in their mother's basement."  I am not complaining that "these newfangled stats have ruined baseball." 

I accept that the battle between traditionalists and saberfans is pretty much over, and the saberfans have won.

It's pretty tough to deny that fact when I look at ESPN.com and see several baseball writers who focus on advanced statistics.  They even include WAR on their statistics page!

So then what is the point of the site?

In my eyes, the empowered sabermetric crowd has become the new arrogant elite.  It feels like many saberfans were held down and mocked by the traditionalists for so long, that now that they've gained acceptance, they carry themselves with a know-it-all attitude.

Prominent saber-minded writers like Rob Neyer and Keith Law certainly aren't helping that reputation.  

Instead of educating and enlightening people to the ways of sabermetrics, they seem to drive people away with their snarky arrogance.

Saberfans portray traditionalists as stubborn, unyielding old fools who refuse to give up antiquated ways of thinking.  Yet from my experience, saberfans can be even more stubborn and unwilling to yield.

The best I can tell, this stubbornness comes from the saberfans having "numbers on their side."

"Oh, people can come up with statistics to prove anything. 14% of people know that."



The typical sabermetric thought process seems to be along these lines:
  1. Come up with a hypothesis.
  2. Find a statistic that backs up that hypothesis.
  3. Convince yourself that the statistic offers irrefutable proof.
  4. Refuse to yield.
It's kind of fun to do, actually!  Here's an example:
  1. Hypothesize that RBIs are an important measure of a player's offensive production.
  2. Check the rosters of every team in baseball, and add up the number of RBIs for each player.
  3. Find that the teams with the highest player RBI totals were the highest scoring offenses.
  4. Conclude that RBIs are a good measure of offensive production.
  5. Refuse to yield.
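The circularity in that recipe is easy to demonstrate with numbers. Here's a minimal sketch using made-up team totals (all figures are synthetic, purely for illustration): since nearly every run scored credits someone with an RBI, team RBI is close to team runs *by definition*, so "discovering" that the two correlate proves nothing:

```python
import random

random.seed(0)

# Synthetic league: 30 hypothetical teams.
teams = []
for _ in range(30):
    runs = random.randint(600, 900)   # made-up team runs scored
    rbi = round(runs * 0.96)          # RBI ~= runs by definition (~96%)
    teams.append((runs, rbi))

# Pearson correlation of team runs vs. team RBI, computed by hand.
n = len(teams)
mx = sum(r for r, _ in teams) / n
my = sum(b for _, b in teams) / n
cov = sum((r - mx) * (b - my) for r, b in teams)
sx = sum((r - mx) ** 2 for r, _ in teams) ** 0.5
sy = sum((b - my) ** 2 for _, b in teams) ** 0.5
corr = cov / (sx * sy)
print(corr)   # ~1.0: the "proof" was baked into the definition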
I'm not advocating abandoning statistical research in baseball.  I think it has indeed provided people with more insight about the game.  I regularly read sabermetrics-focused sites to try and gain more knowledge, and have learned some things that I find fascinating.

What I'm trying to do is to remind people that while baseball is about numbers, it is also more than just numbers.  It's about team chemistry, luck, clutch plays, and moments both amazing and bizarre that make it fun to be a baseball fan.

It's about a team having a "1 in 100" chance of winning, and still finding a way to pull out a victory.

I think that some people have just gotten a little too deep into the numbers to see what's really going on.  I'm trying to help people see the big picture.

The "pendulum has swung" to the side of the saberfans.  The blog represents the start of the back swing.

What bothers me the most is the attitude among many sabers that, if I choose not to embrace their hobby, I'm choosing to be ignorant. To paraphrase Socrates, I admit up front that the only thing I know about baseball is that I know absolutely nothing. Heck, people like Don Zimmer or Jim Leyland, who've been close to the game for decades, admit they still haven't figured out this game -- but some schmuck with a calculator is gonna proclaim he has wisdom on his side? Ridiculous. 

That's not to say there isn't some wisdom to be gleaned from the new stats. But why do so many sabers have to be so doggone smug about it? They make statements like "RBI is a garbage statistic, and the only reason old-timers like Jim Leyland still use it is because they're stodgy and stubborn." Rather than affording longtime managers and others in the game the benefit of the doubt, many sabers use that longevity against them, as "proof" that people in the game resist change. 

I would be happy to enjoy baseball my way, and let others enjoy it their way. But when you go onto various blogs and get lambasted every time you mention RBI or pitchers' wins, it gets a little annoying. What cracks me up the most is, these "scientists" refuse to acknowledge the holes in their logic. One example: "RBI is a garbage stat because it's dependent on factors outside the batter's control." Okay, fine -- but why worship at the altar of bases on balls then? Isn't that also outside the batter's control? In order to draw a walk, the pitcher has to throw four balls outside the strike zone. 

Shouldn't that also give these "scientists" pause? 

It won't, because, while there are some sabers who are open-minded and approach their hobby with a scientific eye, by and large sabermetrics is a cult, not a science. It's all about "we're right and they're wrong -- and I'm going to be snide to anyone who disagrees with me." 




Giants Top Minor League Prospects

  • 1. Joey Bart 6-2, 215 C Power arm and a power bat, playing a premium defensive position. Good catch and throw skills.
  • 2. Heliot Ramos 6-2, 185 OF Potential high-ceiling player the Giants have been looking for. Great bat speed, early returns were impressive.
  • 3. Chris Shaw 6-3, 230 1B Lefty power bat, limited defensively to 1B. Matt Adams comp?
  • 4. Tyler Beede 6-4, 215 RHP from Vanderbilt projects as a top-of-the-rotation starter once he works out his command/control issues. When he misses, he misses by a bunch.
  • 5. Steven Duggar 6-1, 170 CF Another toolsy, under-achieving OF in the Gary Brown mold, hoping for better results.
  • 6. Sandro Fabian 6-0, 180 OF Dominican signee from 2014, shows some pop in his bat. Below average arm and lack of speed should push him towards LF.
  • 7. Aramis Garcia 6-2, 220 C from Florida International projects as a good bat behind the dish with enough defensive skill to play there long-term.
  • 8. Heath Quinn 6-2, 190 OF Strong hitter, makes contact with improving approach at the plate. Returns from hamate bone injury.
  • 9. Garrett Williams 6-1, 205 LHP Former Oklahoma standout, Giants prototype, low-ceiling, high-floor prospect.
  • 10. Shaun Anderson 6-4, 225 RHP Large frame, 3.36 K/BB rate. Can start or relieve
  • 11. Jacob Gonzalez 6-3, 190 3B Good pedigree, impressive bat for HS prospect.
  • 12. Seth Corry 6-2, 195 LHP Highly regarded HS pick. Was mentioned as a possible chip in high-profile trades.
  • 13. C.J. Hinojosa 5-10, 175 SS Scrappy IF prospect in the mold of Kelby Tomlinson, just gets it done.
  • 14. Garett Cave 6-4, 200 RHP He misses a lot of bats and, at times, the plate. 13 K/9 and 5 BB/9. Wild thing.

2019 MLB Draft - Top HS Draft Prospects

  • 1. Bobby Witt, Jr. 6-1, 185 SS Colleyville Heritage HS (TX) Oklahoma commit. Outstanding defensive SS who can hit. 6.4 speed in 60 yd. Touched 97 on mound. Son of former major leaguer. Five-tool potential.
  • 2. Riley Greene 6-2, 190 OF Haggerty HS (FL) Florida commit. Best HS hitting prospect. LH bat with a good eye, plate discipline, and developing power.
  • 3. C.J. Abrams 6-2, 180 SS Blessed Trinity HS (GA) High-ceiling athlete. 70 speed with plus arm. Hitting needs to develop as he matures. Alabama commit.
  • 4. Reece Hinds 6-4, 210 SS Niceville HS (FL) Power bat, committed to LSU. Plus arm, solid enough bat to move to 3B down the road. 98MPH arm.
  • 5. Daniel Espino 6-3, 200 RHP Georgia Premier Academy (GA) LSU commit. Touches 98 on FB with wipe out SL.

2019 MLB Draft - Top College Draft Prospects

  • 1. Adley Rutschman C Oregon State Plus defender with great arm. Excellent receiver plus a switch hitter with some pop in the bat.
  • 2. Shea Langliers C Baylor Excellent throw-and-catch skills with good pop time. Quick bat, uses an all-fields approach with some pop.
  • 3. Zack Thompson 6-2 LHP Kentucky Missed time with an elbow issue. FB up to 95 with plenty of secondary stuff.
  • 4. Matt Wallner 6-5 OF Southern Miss Run-producing bat who doubles as a closer with a mid-to-upper-90s FB. Power bat from the left side, athletic for his size.
  • 5. Nick Lodolo LHP TCU Tall LHP, 95MPH FB and solid breaking stuff.