[syndicated profile] 538_feed

Posted by Ramin Skibba

Aliens could be hiding on almost any of the Milky Way’s roughly 100 billion planets, but so far, we haven’t been able to find them (dubious claims to the contrary notwithstanding). Part of the problem is that astronomers don’t know exactly where to look or what to look for. To have a chance of locating alien life-forms — which is like searching for a needle that may not exist in an infinitely large haystack — they’ll have to narrow the search.

Astronomers hoping to find extraterrestrial life are looking largely for exoplanets (planets outside Earth’s solar system) in the so-called “Goldilocks zone” around each star: a distance range in which a planet is not too hot and not too cold, making it possible for liquid water to exist on the surface. But after studying our own world and many other planetary systems, scientists have come to believe that many factors other than distance are key to the development of life. These include the mix of gases in the atmosphere, the age of the planet and host star, whether the host star often puts out harmful radiation, and how fast the planet rotates — some planets rotate at a rate that leaves the same side always facing their star, so one hemisphere is stuck in perpetual night while the other is locked into scorching day. This makes it a complex problem that scientists can start to tackle with powerful computers, data and statistics. These tools — and new telescope technology — could make the discovery of life beyond Earth more likely.

These images show a star-forming region viewed through the Hubble Space Telescope (left) and a simulation of what it would look like as seen at a potential future observatory called the Large UltraViolet Optical Infrared Surveyor (right). New telescope technology could make the discovery of life beyond Earth more likely.

NASA

Two teams of astronomers are proposing different methods of tackling these questions. One argues that we should try to identify trends in the data generated by surveys of thousands of planets, while the other favors focusing on a handful of individual planets to assess where they’d lie on a scale from uninhabitable to probably populated.

Jacob Bean, an astronomer at the University of Chicago, advocates for the broader approach in a paper he and two other researchers published this spring. It’s not possible to know for sure if a distant planet is friendly to life, Bean says, so he and his colleagues aim to compare lots of planets to figure out which are most likely to host the conditions thought to be important to produce and sustain life. Determining how the amount of water or carbon dioxide in the atmosphere is correlated with distance from the star, for example, could help inform future, more targeted searches that use new space telescopes to look for worlds with hospitable climates. “How many planets do we need to look at to find the number of ‘Earth-like’ ones? That’s the multibillion-dollar question,” he said.

This is an artist’s illustration of systems of planets outside our solar system. Scientists are trying to figure out how to narrow the search for life on other planets.

NASA / ESA / M. Kornmesser (ESO)

Data that’s already available from NASA’s Kepler space telescope could help astronomers figure out what percentage of planets might be habitable. The Kepler mission revolutionized the study of exoplanets: It has allowed astronomers to analyze thousands of planets and their host stars, rather than the mere dozens or hundreds of extraterrestrial bodies — most of which are uninhabitable gas giants — on which we had data in the pre-Kepler period. In all, Kepler scientists have found 2,335 confirmed exoplanets, plus many more candidates waiting to be verified. With this information, researchers can get a better handle on how many solar systems have rocky planets circling at the right distance from the star or stars at the center, how often those stars zap the planets with radiation, how many planets are likely to have water, and how many feature other indications of a habitable climate. From there, scientists could deduce which of these factors are most important to the formation of planets that could develop life as we know it and determine which kinds of planets and stars are most worth focusing on.

That’s the big-picture strategy for the search for life. The other research, led by University of Washington astrobiologist David Catling and soon to undergo peer review, claims that we’re ready to zoom in, moving from questions about whether the conditions are right for life to whether life has actually developed on planets we’re interested in. Catling’s team proposes a statistical framework to evaluate these worlds.

In addition to a planet’s location and size, it matters whether its star gives off tons of radiation that could scorch off the atmosphere, leaving the planet with nothing to protect it from space weather. For example, the planets circling TRAPPIST-1 and Proxima Centauri, two red dwarf stars, exist in just such a threatening environment, and a new study by Harvard astrophysicists gives them a very low chance of supporting life. If a planet does have an atmosphere, then it matters what’s in it, as oxygen could be a sign of alien beings on the surface — even if they’re only tiny — and water vapor means it’s more likely that the climate is friendly to life. Methane, ozone and carbon dioxide could be positive signs too, but they can be produced by processes that don’t necessarily signify life, such as volcanoes.

This artist’s concept compares Earth to the exoplanet Kepler-452b, which sits in the so-called Goldilocks zone of its star. The illustration is just one guess as to what Kepler-452b might look like.

NASA / Ames / JPL-Caltech / T. Pyle

To put together as complete a picture as possible about a planet, astronomers need both high-resolution images of the solar system and a light spectrum of the planet, which reveals what gases are present in the planet’s atmosphere based on which wavelengths of the star’s light appear or fail to appear after passing through that atmosphere. If they had access to more powerful telescopes than those in use today, astronomers would want to collect even more information, including details about the age and activity of the star; the planet’s size and distance from its star; the composition and pressure of the atmosphere; whether there were signs of water, such as glints of light reflecting off oceans; and what signs there were of geological processes such as tectonic or volcanic activity. Catling eventually hopes to be able to use this information to categorize planets so that you could say Planet Y has a 20 to 40 percent chance of having life, while Planet Z has an 80 percent chance.

But at the moment, his plan is largely theoretical.

“We’re not at the point where we can really calculate the frequency or probability of life, but it’s a useful exercise,” said Eric Ford, an astrophysicist and astrostatistician at Penn State University who was not involved in either study. “As in, ‘Here’s what we’d like to do, and, given our limitations, what’s the least-bad assumptions we can make about our prior knowledge?’ It turns an impossible problem into one we can gain a foothold in answering.”

The image on the left shows exoplanet Kepler-538 b. Is there life on 538? Astronomer Frank Drake (right) proposed a formula for estimating the number of alien civilizations in our galaxy.

NASA EXOPLANET ARCHIVE / GETTY IMAGES

Catling and his team proposed an approach that characterizes the chance that there’s life on a planet based on what’s known about the planet and its star, updating the chances as more data comes in. Distinguishing between the knowns and unknowns helps reduce the biases affecting the system and allows it to produce fewer false positives — but only if the humans doing the characterization have a good understanding of how likely it is that a set of planetary features indicates an inhabited planet versus a lifeless one. Since we haven’t yet found life beyond Earth, even in our own solar system, it’s hard to estimate these things with any confidence.
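The details of Catling’s framework aren’t public yet, but the underlying idea — revising a probability as new evidence arrives — is a textbook Bayesian update. Here’s a minimal sketch of that idea; every number below (the prior, the likelihoods) is a made-up illustration, not a value from the study:

```python
# Toy Bayesian update: probability a planet hosts life, revised as
# observations arrive. All numbers here are illustrative assumptions.

def bayes_update(prior, p_evidence_if_life, p_evidence_if_lifeless):
    """Return P(life | evidence) given a prior P(life)."""
    numerator = prior * p_evidence_if_life
    denominator = numerator + (1 - prior) * p_evidence_if_lifeless
    return numerator / denominator

p = 0.01  # hypothetical prior: 1% of rocky Goldilocks-zone planets have life
# Detect oxygen: assumed far more likely on a living world than a dead one.
p = bayes_update(p, p_evidence_if_life=0.5, p_evidence_if_lifeless=0.01)
# Detect water vapor: a weaker, but still positive, sign.
p = bayes_update(p, p_evidence_if_life=0.8, p_evidence_if_lifeless=0.3)
print(f"Updated chance of life: {p:.0%}")
```

The hard part, as the paragraph above notes, is filling in those likelihoods — how often a given signature appears on inhabited versus lifeless worlds — with no confirmed examples of alien life to calibrate against.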

Catling’s approach evokes the famous “Drake equation,” put forth by astronomer Frank Drake in the 1960s as a way to figure out a ballpark number of extraterrestrial civilizations in the galaxy. The idea is to estimate how many stars there are, how many of those have planets, how many of those planets could support life, how many actually develop life, how many of those life-forms evolve into intelligent life, and so on. Starting with the simpler pieces and then building up to more complex ones helps us better understand the puzzle as a whole, even if some big pieces are still missing.
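The Drake equation itself is just a product of factors: N = R* · fp · ne · fl · fi · fc · L. Plugging in numbers is trivial; knowing the right numbers is the hard part. A sketch with purely illustrative inputs (every value below is an assumption, and serious estimates for some factors span many orders of magnitude):

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# Every value below is an illustrative guess, not a measurement.
r_star = 1.5       # new stars formed per year in the galaxy
f_p = 1.0          # fraction of stars with planets (Kepler suggests most)
n_e = 0.4          # potentially habitable planets per planetary system
f_l = 0.1          # fraction of those that actually develop life
f_i = 0.01         # fraction of life-bearing planets that develop intelligence
f_c = 0.1          # fraction of intelligent species emitting detectable signals
lifetime = 10_000  # years a civilization keeps transmitting

n_civilizations = r_star * f_p * n_e * f_l * f_i * f_c * lifetime
print(n_civilizations)  # small changes in any one factor swing the result wildly
```

That sensitivity is exactly why building up the better-measured factors first, as described above, is valuable even while the biological terms remain guesswork.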

“This is a wish list,” Catling said of his group’s method, noting that we don’t have the technology to make it happen. “It’s like trying to find microbes before microscopes in the 16th century. We’re at that point now.”

Catling’s team is anticipating data from new telescopes, like the Transiting Exoplanet Survey Satellite and the James Webb Space Telescope, both set to launch next year. But they’re also looking beyond these to more sophisticated telescopes that may be built in the 2030s and 2040s. Those will likely have the capability to detect more potential signs of life from many more exoplanets.

Both approaches use a lot of data and tell scientists quite a bit about how planets form and whether they harbor the conditions that we think allow life to develop. But at least until those next-generation telescopes are finished, we will probably have to wait to find out if we’re alone in the universe.

“Even if we had an ‘Earth twin’ and detected oxygen and methane and glinting from oceans, we’ll never be 100 percent sure,” Catling said. “The only thing truly 100 percent would be [an alien] signal. … That would be a slam dunk.”


Posted by Neil Paine

A week ago, I wrote about how both the Los Angeles Dodgers and Houston Astros are in rare historical company this season. According to FiveThirtyEight’s Elo power ratings — which measure a team’s strength at any moment — each is playing roughly as well as the fabled 1927 Yankees played. But this season’s top-heaviness extends well beyond just the Astros and Dodgers. Each member of MLB’s ruling class this season is unusually strong, which suggests that, come October, we may be watching the most stacked playoff field in memory. That’s great news for fans — but it’s also really bad news for the wannabes and would-be Cinderellas that are currently chasing the front-runners.

One easy way to visualize the power balance of a league is to look at how its teams at any given ranking slot measure up to those from other seasons in the past. For example, the Dodgers have the best Elo rating (1602) of any top-ranked team through July 20 of a season in the expansion era (since 1961). Likewise, the Astros are by far the best second-ranked Elo team of the expansion era.
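FiveThirtyEight’s MLB Elo has its own wrinkles (home-field and other adjustments), but the core mechanic is the classic Elo rule: a rating gap implies a win expectancy, and each result nudges ratings toward what actually happened. A simplified sketch — the K-factor here is an arbitrary choice, not FiveThirtyEight’s:

```python
def expected_score(rating_a, rating_b):
    """Classic Elo win expectancy for team A against team B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a, rating_b, a_won, k=4.0):
    """Return updated ratings after one game (k is an arbitrary choice here)."""
    exp_a = expected_score(rating_a, rating_b)
    change = k * ((1.0 if a_won else 0.0) - exp_a)
    return rating_a + change, rating_b - change

# A 1602-rated team facing an average (1500) team on neutral ground
# is expected to win roughly 64 percent of the time.
print(round(expected_score(1602, 1500), 2))
```

This is why a 1602 rating is so striking: in a league where most teams cluster near 1500, it implies being a solid favorite against nearly everyone.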

Go down the line, and each of Elo’s top six teams carries one of the strongest ratings in modern history for its slot. The third-ranked Washington Nationals, for instance, are more like the top team in an average season than a mere third wheel. The Boston Red Sox would be running a strong third most seasons; this year, they’re a distant fourth. The Indians and Cubs can both tell similar stories.

As we approach the July 31 trade deadline, this is more than just an academic curiosity. A team’s willingness to pony up prospects for a better shot at the World Series is directly tied to how much good it thinks a trade will do. In a wide-open season, even teams outside the top tier of contenders could be convinced to roll the dice on an upgrade — particularly with the expanded wild-card format. But the stronger the top teams are, the less incentive teams on the periphery have to make a championship push. According to Elo, we haven’t seen a stronger crop of elite teams in the expansion era than this season’s top six.1

As recently as a few years ago, you could have lamented the lack of dominant teams at the top of the major leagues. At this same time in 2015, for instance, the leading Elo teams were among the weakest at their slots in the expansion era. But baseball’s era of parity seems to be officially over, with the game moving back toward imbalance. While a top-heavy MLB might never look like its basketball equivalent,2 it’s still going to be tougher than usual for aspiring contenders to break through — a fact you can bet every GM is keenly aware of in the lead-up to the deadline.


Posted by Anna Maria Barry-Jester, Maggie Koerth-Baker and Kathryn Casteel

Welcome to TrumpBeat, FiveThirtyEight’s weekly feature on the latest policy developments in Washington and beyond. Want to get TrumpBeat in your inbox each week? Sign up for our newsletter. Comments, criticism or suggestions for future columns? Email us, or drop a note in the comments.

Republican senators are still sorting out how to accomplish their goal of repealing and replacing the Affordable Care Act. Proposed strategies this week have been all over the place — passing GOP senators’ Better Care Reconciliation Act, doing nothing and letting Obamacare fail, and repealing the ACA and replacing it later. These are very different policy moves that would have very different effects on the health insurance landscape. But all of them have one thing in common: They assume the private insurance marketplaces set up under the Affordable Care Act continue to provide coverage through 2018. But there are at least three key decisions the Trump administration has to make that could affect what that looks like.

Insurers in most states have submitted proposals for the insurance plans they want to sell in 2018. Those plans are being negotiated and reviewed. According to data compiled by Charles Gaba at ACASignups.net, an independent tracker who supports the ACA, there may be an average 33 percent increase in premiums next year (before subsidies), and around 20 percent of that is due to uncertainty created by the current administration. Although negotiations are ongoing and some states have yet to reveal proposals, those estimates square with statements from insurers and health policy experts. A national average, however, masks huge variations by state: For example, insurers in Vermont and Oregon are seeking relatively small premium increases, while people in New Mexico and Georgia could see steep increases.

There are a variety of reasons for these increases. One is that it’s not clear whether the Trump administration will enforce the mandate that most people have health insurance or pay a fine, though it has already weakened it, which could mean fewer healthy people participating in the marketplaces and higher premiums for those who do. Another is that we don’t know whether President Trump will use marketing and advertising to promote enrollment as the Obama administration did, which has been shown to increase enrollment among healthier people. There are already signs he will not: The Daily Beast reported that the Department of Health and Human Services has been using money earmarked to promote enrollment to create videos attacking Obamacare. And Politico reported that the department canceled contracts with two companies that were supposed to help enroll people during the open enrollment period for next year. Again, fewer healthier people in the marketplaces results in higher premiums.

The other big outstanding question is how Trump and Congress will deal with payments owed to health insurance companies. Insurers are required to give the poorest enrollees discounts on things like deductibles and co-pays; the law intended for the federal government to reimburse those discounts — with Congress appropriating the funds. When Congress refused to do so, Obama made the payments anyway, and Congress sued his administration. (The court case is ongoing.) Trump has been making the payments on a month-by-month basis while threatening to pull them, using the payments as leverage in the ongoing repeal debate in Congress. More than half of people using the marketplaces receive these payments; if insurers aren’t reimbursed, it will push up premiums and could lead some to leave the marketplace.

What the administration does with these three things will help determine premiums for next year, which will also determine who can afford coverage and how much the federal government spends in subsidies. We’ve written about uncertainty and the markets before at TrumpBeat, but it’s worth bearing in mind that the ongoing debate in Congress affects not only future legislation, but also existing law.

Voting integrity: ID check

The 1993 National Voter Registration Act was aimed at making it easier for more Americans to vote by coupling registration opportunities with driver’s license and public assistance applications and making it harder to kick registered voters off the rolls. Now there’s evidence that Kris Kobach, the vice chairman of Trump’s new Presidential Advisory Commission on Election Integrity, wants to change that law.

In emails that were released last week as part of a lawsuit brought against him by the American Civil Liberties Union, Kobach wrote of planned legislation that would amend the National Voter Registration Act so that it explicitly allows states to require proof of citizenship — a passport or a birth certificate — for a voter to register. That news has only added fodder for the chorus of criticism aimed at Kobach and his commission.

We’ve written previously about the many problems with Kobach’s claims of widespread voter fraud. The short version: Nobody knows exactly how much illegal voting occurs, but all the available data points to it being extremely rare. Interestingly, though, it’s just as hard to prove the negative effects of the voter ID laws Kobach has championed.

As with illegal voting, it’s difficult to study voter ID laws, and nobody knows for sure whether they reduce turnout — effectively suppressing legal votes. No two states have exactly the same laws, and most of the laws have been in effect for less than five years. Maybe most importantly, there are confounding factors that make it difficult to tease apart cause and effect — for instance, the states that had adopted a strict voter ID law by 2015 already had lower voter turnout than those that did not. That comes from an analysis of peer-reviewed research on this topic published in May by Benjamin Highton, a political scientist at the University of California, Davis. He found just four studies that were designed to account for these kinds of real-world problems; all came up with results that suggest ID laws have very limited impacts (less than 4 percentage points) on voter turnout.

This is unlikely to be the final word on the subject, of course. Scientifically, this question is at the starting gate, not the finish line. But it’s possible that American politics is currently fighting a heated partisan battle over two risks — voter fraud and ID-law-related voter suppression — that are both extremely small.

Immigration: Opening the door

A central plank of Trump’s “America First” campaign platform was a pledge to limit immigration: He vowed to crack down on both illegal immigration and abuse of guest-worker programs that, Trump argued, push down wages for American workers. Since taking office, Trump has stuck with his “hire American” rhetoric, signing an executive order that tightened rules on visas for foreign workers and pledging to overhaul the legal immigration system.

This week, however, Trump’s Department of Homeland Security said it would increase the number of visas for workers in low-wage industries that rely on temporary employees. The department announced that it was adding 15,000 H-2B visas for workers in seasonal, nonagricultural jobs, a 45 percent increase from what is normally issued for the second half of the fiscal year.

The H-2B announcement scrambled the usual politics of immigration. Business groups, which have often expressed concern about Trump’s hostility to immigration, praised the decision, saying that they need temporary foreign workers to fill labor shortages in the hospitality, construction and seafood industries. Meanwhile, the Center for Immigration Studies, an independent research organization that advocates for limited immigration, criticized the move, which it argued takes jobs away from a pool of U.S. workers. (The group noted that the White House declared this “Made in America” week and highlighted the administration’s hire-American policies.)

In its announcement, the Department of Homeland Security noted that businesses only qualify for the visas if they can prove that they are likely to “suffer irreparable harm” if unable to hire foreign workers. But critics are skeptical that labor shortages are as severe as companies claim. The Economic Policy Institute, a liberal think tank that Trump has aligned himself with on certain issues, argued in a recent report that almost all of the top 10 occupations for H-2B workers have relatively high unemployment rates and have experienced stagnant or declining wages since 2004. That, the group argues, suggests there is no shortage of available workers, at least on a national level. (The report does note that it is possible that states and local areas are experiencing a limited labor pool for these jobs but claims the H-2B program maintains a framework that exploits foreign workers.)

It isn’t clear why Trump agreed to expand the H-2B program even as his administration is cracking down on visas available to skilled workers in areas such as tech and limiting other routes of legal immigration. But it’s worth noting that Trump himself has used H-2B visas in the past to hire temporary workers for his resorts and hotels and even remarked on the difficulties of hiring part-time workers during a debate last year.


Posted by Edited by Oliver Roeder

Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-sized and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,3 and you may get a shoutout in next week’s column. If you need a hint, or if you have a favorite puzzle collecting dust in your attic, find me on Twitter.

Riddler Express

From Steven Pratt, a real-life electoral problem:

In Steven’s hometown, 11 fine folks are running in a primary for three at-large seats on the City Commission. Each voter may vote for up to three candidates. This election will reduce the field of candidates from 11 to six.

  1. How many different (legal) ways may a voter cast his or her ballot?
  2. How many different outcomes (excluding ties) are there for who advances to November’s general election?

Submit your answer

Riddler Classic

From Àlex Sierra, a cloak-and-dagger puzzle:

Twice before, I’ve pitted Riddler Nation against itself in a battle royale for national domination. War must be waged once more. Here are the rules. There are two warlords: you and your archenemy, with whom you’re competing to conquer castles and collect the most victory points. Each of the 10 castles has its own strategic value for a would-be conqueror. Specifically, the castles are worth 1, 2, 3, … , 9 and 10 victory points. You and your enemy each have 100 soldiers to distribute between any of the 10 castles. Whoever sends more soldiers to a given castle conquers that castle and wins its victory points. (If you each send the same number of troops, you split the points.) Whoever ends up with the most points wins.

But now, you have a spy! You know how many soldiers your archenemy will send to each castle. The bad news, though, is that you no longer have 100 soldiers — your army suffered some losses in a previous battle.

What is the value of the spy?

That is, how many soldiers do you need to have in order to win, no matter the distribution of your opponent’s soldiers? Put another way: What is the smallest number k such that, for any distribution of 100 soldiers among the 10 castles by your opponent, you can distribute k soldiers and win the battle?
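No spoilers here, but the battle rules are simple enough to encode, which is handy for checking candidate answers against specific enemy deployments. A minimal scorer (the example armies below are arbitrary, not a recommended strategy):

```python
def battle_score(yours, theirs):
    """Score one battle: castle i (1-indexed) is worth i points.
    More soldiers takes the castle; a tie splits its points."""
    score = 0.0
    for castle, (a, b) in enumerate(zip(yours, theirs), start=1):
        if a > b:
            score += castle
        elif a == b:
            score += castle / 2
    return score

# Arbitrary example: against a uniform enemy, abandon castle 10
# and outbid everywhere else.
enemy = [10] * 10
mine = [11, 11, 11, 11, 11, 11, 11, 11, 12, 0]
total = sum(range(1, 11))  # 55 points on the board; more than 27.5 wins
print(battle_score(mine, enemy), total)
```

Wrapping this in a loop over all of an opponent’s possible deployments is the brute-force route to the answer — the interesting part is doing better than brute force.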

Submit your answer

Solution to last week’s Riddler Express

Congratulations to 👏 Elaine Hou 👏 of Tampa, Florida, winner of the previous Express puzzle!

You and your two older siblings are sharing two extra-large pizzas and decide to cut them in an unusual way. You overlap the pizzas so that the crust of one touches the center of the other (and vice versa since they are the same size). You then slice both pizzas around the area of overlap. Two of you will each get one of the crescent-shaped pieces, and the third will get both of the football-shaped cutouts. Which should you choose to get more pizza: one crescent or two footballs?

You’ll get more pizza by eating the two footballs.

To show why, solver Sai Rijal began with the following shape in the middle of the pizzas:

Since sides AB, BC, CD, DA and BD are all radii of one of the circular pizzas, Sai explained, they form two equilateral triangles: ABD and CDB. Because of this, the angles ABC and ADC are each 120 degrees. Therefore, the slice of the red pizza bounded by ABC and the slice of the blue pizza bounded by ADC each have area \((1/3)\pi r^2\) — each is exactly one-third of a pizza. Each of your two football-shaped pieces contains one of these 1/3 slices. Those two slices alone add up to 2/3 of a pizza, which is your fair share, since there were two pizzas and three eaters. The remaining segments are extra pizza you have earned through mathematics!

Although it wasn’t necessary for answering the question, you could also go further and calculate the specific areas of the crescents and the footballs. Assume, for simplicity, that each pizza has a radius of 1. It turns out that the two footballs have an area of \(\frac{4\pi -3\sqrt{3}}{3}\), or about 2.46; one crescent has an area of \(\frac{2\pi+3\sqrt{3}}{6}\), or about 1.91. Solver Zack Segel shared his lovely pen and paper work to that end:

GitHub user mimno even created an interactive Monte Carlo simulation of the pizzas, again finding that you’re better off going for the footballs. And others, I’m told, solved the problem by ordering two actual extra-large pizzas. Bon appétit!
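The closed-form answers above are also easy to sanity-check numerically, using the standard lens-area formula for two overlapping circles. For unit-radius pizzas with centers exactly one radius apart (crust through center), a quick check:

```python
import math

r = 1.0  # pizza radius; centers are exactly r apart (crust through center)
d = r
# Area of the lens (overlap) of two circles of radius r with centers d apart.
lens = 2 * r**2 * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r**2 - d**2)

two_footballs = 2 * lens              # one lens-shaped cutout from each pizza
one_crescent = math.pi * r**2 - lens  # a whole pizza minus its cutout

print(round(two_footballs, 2), round(one_crescent, 2))  # 2.46 vs 1.91
```

The printed values match the exact expressions \(\frac{4\pi-3\sqrt{3}}{3}\) and \(\frac{2\pi+3\sqrt{3}}{6}\) given above.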

Solution to last week’s Riddler Classic

Congratulations to 👏 Neema Salimi 👏 of Atlanta, winner of the previous Classic puzzle!

In the National Squishyball league, the owner of the top-seeded team (i.e., you) gets to select the length of the championship series in advance of the first game, so you could decide to play a single game, a best two out of three series, a three out of five series, etc., all the way up to a 50 out of 99 series. The owner of the winning team gets $1 million minus $10,000 for each of the victories required to win the series, regardless of how many games the series lasts in total. Thus, if the top-seeded team’s owner selects a single-game championship, the winning owner will collect $990,000. If he or she selects a four out of seven series, the winning team’s owner will collect $960,000. The owner of the losing team gets nothing. Your team has a 60 percent chance of winning any individual game. How long a series should you select in order to maximize your expected winnings?

You should select a best-of-25 series, where the first team to 13 wins takes the title. You stand to win about $736,222 on average.

This problem was most commonly approached computationally, with most solvers turning to Excel to guide them through the postseason strategizing. Solver Andrew Hoffman explained how he built such a spreadsheet:

At any given point in the series, each team has a certain number of games left that it needs to win in order to win the series (call this number A for Acme and B for Boondocks). They start with the same number. After each game, there’s a 60 percent chance A decreases by 1 and a 40 percent chance B decreases by 1. If either team has 0 games to go, that team has won. You can then build a table recursively for the win probability at cell (A, B), which we’ll call P(A, B) = 0.6 * P(A-1, B) + 0.4 * P(A, B-1). For example, P(2, 1) = 0.6 * P(1, 1) + 0.4 * P(2, 0) = 0.6 * (0.6 * P(0, 1) + 0.4 * P(1, 0)) + 0.4 * 0 = 0.6 * (0.6 * 1 + 0.4 * 0) = 0.6 * 0.6 = 0.36. Build the table through P(50, 50). Then for each N, multiply P(N, N) by the winnings if N games are required: 1,000,000 – 10,000N. This gives the expected winnings for Acme for each N. The maximum value of $736,222.04 occurs at N = 13, referring to a 13-out-of-25 series.
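Andrew’s recursion translates into a few lines of Python — a re-implementation sketch of the same idea, not his spreadsheet:

```python
from functools import lru_cache

P_WIN = 0.6  # Acme's chance of winning any single game

@lru_cache(maxsize=None)
def p_series(a, b):
    """Probability Acme wins a series in which it needs `a` more wins
    and Boondocks needs `b` more wins."""
    if a == 0:
        return 1.0
    if b == 0:
        return 0.0
    return P_WIN * p_series(a - 1, b) + (1 - P_WIN) * p_series(a, b - 1)

def expected_winnings(n):
    """Expected prize for choosing a first-to-n series."""
    return p_series(n, n) * (1_000_000 - 10_000 * n)

best = max(range(1, 51), key=expected_winnings)
print(best, round(expected_winnings(best)))  # 13 736222
```

Longer series make the favorite likelier to prevail but shrink the pot by $10,000 per required win; first-to-13 is where those two forces balance.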

Andrew also submitted his graphical results of this project, showing that the expected value (EV) peaks at a series requiring 13 wins.

Others, such as Tyler Barron, Chris Ketelsen and Justin Brookman, turned to Python code, and you can find their alternate solutions above.

Want to submit a riddle?

Email me at oliver.roeder@fivethirtyeight.com.


Posted by Walt Hickey

You’re reading Significant Digits, a daily digest of the numbers tucked inside the news.


-4 points

Favorability rating for the New York Yankees, the only negative spread in the league, according to a SurveyMonkey Audience poll. The most broadly liked teams are the Cubs, Cardinals and Royals. [FiveThirtyEight]


9 years

O.J. Simpson was granted parole yesterday and will be released from Lovelock Correctional Facility on October 1 after nine years behind bars for a Las Vegas robbery. [ABC News]


$10

The for-profit Ark Encounter, a Bible-themed amusement park, sold a piece of land the county says is worth $48 million to a nonprofit affiliate, Crosswater Canyon, for $10. The park is attempting to become exempt from a new safety tax. [Lexington Herald Leader]


28 percent

Sears’ stock is up 28 percent from a month ago, a rally bolstered by new cash injections and the announcement yesterday that the company will sell appliances on Amazon.com with Alexa technology allowing people to control their appliances with voice commands. [CNBC]


46 percent

Google Glass, which as recently as two years ago was the cultural signifier that let the world know you make too much money, has begun a second life as the industrial wave of the future, with manufacturers and conglomerates using the wearable technology to optimize performance. GE reports a 46 percent decrease in the time it takes a warehouse picker to complete a task when using the product, and surveyed employees overwhelmingly said it would reduce errors. [Wired]


70 foreign workers

In the middle of what the White House pitched as “Made in America” week, the Trump Mar-a-Lago club asked the government to allow it to hire 70 foreign workers in the fall because, it says, it cannot find American cooks, waiters and housekeepers. [The Washington Post]


If you see a significant digit in the wild, send it to @WaltHickey.


Posted by Anna Maria Barry-Jester

It has been a very confusing week in federal health care policy. Early in the week, the Senate abruptly abandoned an effort to pass a bill to repeal and replace parts of the Affordable Care Act after four senators said they wouldn’t support it. Senate Majority Leader Mitch McConnell then said he would push to pass a bill that would repeal parts of the law in two years, buying the Senate more time to come up with a replacement. Three senators quickly said they wouldn’t support that approach, and the Senate all but gave up. That lasted until President Trump called various senators to the White House, pressuring them to keep working until they agreed on a bill, just a day after he suggested letting Obamacare fail.

Like I said, it’s been a confusing week.

There are some obvious reasons that the GOP is having a hard time coalescing around a plan. Part of the problem is that Republicans don’t agree on the priorities for repeal. The conservative wing of the party wants to peel back regulations and reduce federal spending. Moderates are concerned about changes to Medicaid, the state-federal health insurance program for the poor, and reduced support for low-income people who are buying private insurance. Two have also said they don’t support defunding Planned Parenthood for a year, as would happen under each of the bills.

Then there’s the challenge of the GOP bills’ lack of support from the public.

You’d be forgiven for not knowing where things stand on the seven-year Republican promise to repeal and replace Obamacare. There are at least three bills floating around, each with opponents within the Republican Party and varying levels of detail about what the effect of each plan would be on the health insurance landscape.

So here’s a roundup of current proposals, what we know about their impact and who supports them:

The Senate’s Better Care Reconciliation Act

What’s in the bill: This legislation — a new version of the measure that has been debated since the first draft was released on June 22 — would reduce subsidies for people who buy insurance on the marketplaces set up by Obamacare. Some of the Obamacare taxes would be repealed (though two taxes on the wealthy would remain in place). It would allow states to opt out of many of the insurance market regulations, including mandatory coverage of “essential health benefits,” which include maternity care and mental health treatment.

The legislation would also freeze the Obamacare expansion of Medicaid. The expansion opened up the program to all people earning under 138 percent of the federal poverty line in states that opted in.4 Starting in 2020, the expanded part of the program would take no new enrollees, and states would be reimbursed significantly less for those who continue to be covered (under current law, the federal government would pick up 90 percent of the cost of those enrollees). The bill would also put most of the rest of the Medicaid program on a budget. States would receive a maximum fixed amount per enrollee, or a lump sum for the whole state program, rather than the open-ended reimbursements they get today.

Altogether, the bill would decrease costs for higher-income, healthier people without employer-sponsored insurance and would increase costs for lower-income, sicker people.

What we know about its effects: The bill was posted Thursday, and a Congressional Budget Office analysis of it released the same day found that 15 million fewer people would have insurance coverage next year and 22 million fewer would be covered in 2026, compared with how many would be covered under current law. Premiums would increase over the next two years even as the plans those premiums pay for cover less, according to the CBO, but would decrease starting in 2020. This plan would reduce the federal deficit by $420 billion over a decade, according to CBO estimates.

Who supports it: It’s not yet clear how senators feel about it, although there are clues from previous, similar iterations of the bill. It’s unlikely to please conservative Sens. Ted Cruz of Texas, Mike Lee of Utah, Rand Paul of Kentucky and Ron Johnson of Wisconsin, who have said that previous iterations of the Senate bill, similar in many ways to this version,5 didn’t do enough to cut regulations. Several moderate Republicans have also expressed concern about the proposed changes to Medicaid funding.

The Senate’s Better Care Reconciliation Act, with an amendment from Ted Cruz

What’s in the bill: This version of the bill is largely similar to the one above, but it includes a complicated amendment adopted from a proposal by Cruz. The change would allow insurers who sell regulated, Obamacare-compliant plans to also sell largely unregulated plans. Insurers would be required to offer coverage to people with pre-existing conditions and cover essential health benefits for the Obamacare-compliant plans, and they wouldn’t be allowed to charge based on how sick a person is. Insurers selling these plans, however, could likely also sell plans that cover far fewer services and could deny people coverage based on their health status or charge them more for these plans.

What we know about its effects: The amendment is so complicated that HuffPost reported that it could be a month before it can be fully analyzed by the CBO. The Senate needs a score of the bill before it can proceed with a vote, so it asked the Department of Health and Human Services for an analysis; HHS hired the consulting firm McKinsey to produce a report, which the Washington Examiner obtained Wednesday. The report purportedly shows that premiums would drop under the Cruz plan, but experts say it offers little in the way of useful information, as Sarah Kliff explained at Vox. For starters, it looks at how the amendment would work if added to existing law, not the Republican bill.

Experts believe it would gut protections for people with pre-existing conditions by pricing them out of the market. The insurance industry has come out hard against the Cruz plan. In a memo circulated publicly, America’s Health Insurance Plans, one of the insurance industry’s largest trade associations, condemned the amendment, while the Blue Cross Blue Shield Association, a lobbying group representing Blue Cross Blue Shield insurers, told Senate leaders the proposal was “unworkable.” Health policy experts warn that if the Cruz amendment becomes law, healthy people will buy on the unregulated, cheaper market, leaving people with more health care needs on the Obamacare markets.

Who supports it: Four senators have previously said they would not vote to move forward with debating this bill: Lee, Paul, Susan Collins of Maine and Jerry Moran of Kansas. Only 14 senators have expressed clear support, according to The New York Times.

Obamacare Repeal Reconciliation Act, aka repeal and delay

What’s in the bill: This bill would repeal much of the ACA starting in 2020, though some changes would take effect immediately.

The bill would repeal the ACA’s expansion of Medicaid, as well as the subsidies that help people buy insurance on the private marketplace. It would repeal all of the taxes created by the ACA and eliminate a fund created for public health work after 2018. It includes numerous other changes as well, such as eliminating requirements on what services state Medicaid programs must cover. The repeal bill would, however, add $1.5 billion over two years to respond to the opioid crisis and funding for community mental health centers.

It would also retroactively (to 2016) get rid of the requirement that employers offer coverage to employees and the requirement that most people have insurance — by eliminating the financial penalties for both.

What we know about its effects: The CBO (which has had a busy week) released an analysis of this bill on Wednesday. The agency thinks 32 million additional people would be uninsured in 2026 (compared with current law) and that the federal deficit would be reduced by $473 billion during that time. The estimated increase in the number of uninsured people includes 19 million who would fall from the Medicaid rolls. But millions would also be uninsured as a result of an upended insurance market, according to the analysis — the agency found that premiums would roughly double by 2026 and that about three-quarters of the population would live in places where no insurer would be willing to sell coverage in the private market.

Of course, this strategy is built around developing a replacement plan over the next two years. But it’s impossible to say what that would look like. Meanwhile, with the immediate repeal of the individual insurance mandate, some 17 million would be expected to lose coverage next year.

Who supports it: Notably, Sen. Lamar Alexander of Tennessee said after a meeting on Wednesday that he didn’t think there were even 40 senators who supported the strategy of repeal and then replace later. Several senators, including Collins, Lisa Murkowski of Alaska and Shelley Moore Capito of West Virginia, have already come out against the approach.

[syndicated profile] 538_feed

Posted by Perry Bacon Jr. and Harry Enten

Arizona Sen. John McCain has been more critical of President Trump in recent weeks.

Melina Mara / The Washington Post via Getty Images

Arizona Sen. John McCain was diagnosed with brain cancer this week, his office announced on Wednesday. He has a brain tumor called a glioblastoma and is now in Arizona, recovering from a surgery to remove a blood clot above his left eye. It is not immediately clear whether McCain will return to the Senate in the next few days or weeks or if his absence will be longer. FiveThirtyEight wishes McCain and his family the best in his recovery.

McCain is a well-respected figure in both parties, which led to an outpouring of well-wishers that included both President Trump and former President Obama. He is also a landmark figure in American politics — and has at times been both an ally and opponent of Trump — so we thought it was important to look at McCain and his potential absence in that context.

McCain, 80, has often been described — and has sometimes described himself — as a “maverick.” But as we discussed in February, the truth is somewhat more complicated, with McCain having gone through various phases since he was first elected to the Senate in 1986.

After having been a fairly typical Republican for his first dozen or so years in the Senate, McCain ran to the left of front-runner and eventual nominee George W. Bush in his 2000 presidential bid. He made campaign finance reform one of the central themes of his candidacy. In the run-up to his 2000 candidacy, he often sided with moderates or liberals on key votes, as can be seen in how often McCain differed from the conservative position on the American Conservative Union scorecard:

Disenchanted with the Bush presidency, McCain flirted in 2004 with joining the ticket of Democratic nominee John Kerry. But in 2007, on the eve of his next presidential run, McCain became one of the leading advocates of increasing the number of troops in Iraq, which was a much more popular idea on the right than the left. Again showing his ideological flexibility as the GOP’s 2008 presidential nominee, McCain considered picking Connecticut senator and former Democrat6 Joe Lieberman as his running mate; Lieberman supported hawkish national security policies like McCain but also backed abortion rights and held many other liberal views. But instead McCain opted for Sarah Palin, who was strongly against abortion rights and beloved by the conservative base.

In the Obama years, McCain voted with Republicans on the big issues, such as opposing the Affordable Care Act. Yet, he worked with Democrats on a 2013 bill that would have granted citizenship to undocumented immigrants. And at the end of last year’s presidential campaign, he declared that he would not support Donald Trump. (Earlier in the campaign, Trump had mocked McCain’s war record, saying, “He’s not a war hero. He’s a war hero because he was captured. I like people who weren’t captured.”)

So is McCain a flip-flopper — or an opportunist? Neither, really; instead, he’s been fairly consistent.7 For most of his tenure in office, McCain has not shifted his legislative philosophy so much as the rest of Congress has shifted — and become more partisan and ideological — around him. Although McCain voted with Republicans about the same amount in both periods, he went from being slightly more partisan than most senators between 1987 and 1996 to among the least partisan senators in the years since then because party-line voting increased so much during that period.

McCain is probably best understood not through a label like “maverick,” but instead through his actual positions. He is consistently pro-military, pro-intervention and hawkish on national security issues, from Iraq to Russia to North Korea. He is also most passionate about those matters, leaving writing bills on issues such as abortion or tax reform largely to his colleagues. On domestic policy, McCain can be all over the place; for example, favoring a path to citizenship for undocumented immigrants at times but saying at other times that he opposes the idea.

How all of this has played out in the era of Trump is complicated. McCain has voted with Trump’s position 90.5 percent of the time, according to FiveThirtyEight’s Trump Score. Still, in such a highly partisan epoch, only two Republican senators — Maine’s Susan Collins and Kentucky’s Rand Paul — have voted with Trump less often this year.

And as time has gone on, McCain has come to be more critical of the president. After almost always voting with Trump from January to April, he has voted against Trump’s position on three of the last four Senate votes in which the White House has taken a clear stance. He was one of the strongest voices in the Senate in the chamber’s June decision, by a 97-2 vote, to add new sanctions against Russia and require congressional approval to lift existing ones, an idea that the Trump administration opposed.

McCain has built relationships with figures like Defense Secretary Jim Mattis, who is trying to steer Trump toward more traditional foreign policy stands. And from the beginning of Trump’s tenure, McCain has been taking trips abroad and telling anyone who will listen that the president’s comments about Muslims and other controversial stances are not representative of all Americans. It is hard to think of many other examples of U.S. senators repeatedly questioning a president of their own party while traveling overseas.

McCain has not just differed from the president on foreign policy. The Arizona senator has called for the creation of a special congressional committee to investigate Trump’s ties to Russia, along the lines of what was created in the Nixon era to probe Watergate. He has also urged reporters to keep pushing for details on the Trump-Russia issue. When news emerged recently about a meeting between Donald Trump Jr. and Russian figures, McCain pointedly said, “I guarantee you there will be more shoes to drop, I can just guarantee it.”

McCain’s critics on the center and the left have more room to be critical about his positions on domestic policy. He is unquestionably establishment-friendly and media-friendly, making constant appearances on Sunday morning talk shows, which often makes him seem more liberal than his actual record. Historically, he’s been quite conservative on economic policy. And while at times McCain has been sarcastic and pessimistic in his assessment of the Republicans’ health care bill — criticizing Republicans’ secrecy in drafting it and recently suggesting that the GOP work on a bipartisan bill — he’s been slow to articulate his own position on the legislation, although a statement he issued last week suggested the bill needed significant revisions.

Still, even if they have taken the same stances on most issues for the last six months, John McCain is a very different kind of Republican than Donald Trump. He’s been one of the leading figures of a group of senators that includes Nebraska’s Ben Sasse, Arizona’s Jeff Flake, South Carolina’s Lindsey Graham and others. These senators are not moderate or anti-Trump, at least according to their voting records. But they are different from the president in style and tone and are willing to criticize the White House on some issues. Unlike Trump, they can also be gracious and conciliatory toward their political opponents. In 2008, when rank-and-file Republican voters at his events spoke negatively about Obama’s character and background (falsely suggesting that Obama was an “Arab,” for example), McCain defended his Democratic rival and urged a focus on policy differences. He declined to pointedly attack Obama’s relationship with the controversial pastor Rev. Jeremiah Wright.

So if you are Donald Trump, McCain’s absence is complicated. It will make it even harder for Republicans to pass a health care bill, as McCain has backed Trump’s agenda most of the time. But when McCain has opposed Trump, he’s been a loud and proud critic — and he’s been opposing Trump more often in recent weeks.

making space to be creative

Jul. 20th, 2017 11:32 pm
[syndicated profile] wwdn_feed

Posted by Wil

One week and about ten hours ago, I decided to step away from Twitter for a little bit. The specific details aren’t important, and I suspect that many of you reading this now are already nodding in agreement because you grok why. But I took it off my phone, and I haven’t been to the website on my desktop since. For the first 48 hours, I spent a lot of time wondering if I was making a choice that mattered, and thinking about how I wasn’t habitually looking at Twitter every few minutes to see if I’d missed anything funny, or to see the latest bullshit spewing forth from President Fuckface’s mouthanus. I was, ironically, spending more time thinking about Twitter since I wasn’t using it than I spent thinking about it when I was.

It started out as a 24-hour break, then it was a 48-hour break, then it was the weekend, and here we are one week later and I don’t feel like I’m missing anything important. I feel like I’ve given myself more time to be quiet and alone, more time to reflect on things, and I’ve created space in my life to let my mind wander and get creative.

I’m not creating as much as I want to, and I’m starting to feel like maybe I’ll never be able to create as much as I want to, but I’ve gotten some stuff done this week that probably wouldn’t have gotten done if Twitter had been filling up the space that I needed.

Here’s a little bit from my blog post that became a short story that grew into a novella that is now a novel, All We Ever Wanted Was Everything:

My mother was leaning against her car, talking with one of the other moms, when we arrived. My sister was throwing a Strawberry Shortcake doll into the air and catching it while they watched. I walked out of the bus and across the blazing hot blacktop to meet her.

“Willow, catch!” my sister cried, sending Strawberry Shortcake in a low arc toward me. I caught her without enthusiasm and handed her back. “You’re supposed to throw her to me!” Amanda said, demonstrating. Her doll floated in a lazy circle, arms and legs pinwheeling, before falling back down into my sister’s waiting arms. The writer in me wants to make a clever reference to how I was feeling at that moment, about how I could relate to Strawberry Fucking Shortcake, spinning out of control in the air above us, but it feels hacky, so I’ll just talk about how I wanted to make the reference without actually making the reference, thereby giving myself permission to do a hacky writer’s trick without actually doing it. See, there’s nothing tricky about writing, it’s just a little trick!

It’s still in the first draft, and I may not keep all or even any of it, but after putting it aside for months while I was depressed about too many things to look at it, it feels so good to be back into this story.

Oh, speaking of writing, I got notes back from the editors on my Star Wars 40th anthology submission. I thought that, for sure, they’d want me to rework a ton of it, but all they asked me to do is change a name! And they told me it was beautiful! So I’ve been feeling like a Capital-W Writer for a few days.

And speaking of feeling happy for a change, Hasbro and Machinima announced that I’m a voice in the next installment of the Transformers animated series, Titans Return. And it feels silly to care about this particular thing, but Daily Variety put my name in the headline, which made me feel really, really good. I’ve always felt like the only thing that should matter is the work, and that the work should be able to stand on its own … but that’s not the reality even a little bit. Daily Variety is the industry’s paper of record, so when it chooses to put you in the headline of a story, people pay attention, and it matters in the way that can make the difference between getting called for a meeting or not, as the last ten years of my life as an actor have shown.

It’s also a good reminder that, even if I’m not getting the opportunities I want to be an on-camera actor, it is entirely within my power to create the space I need to be a writer.

 

[syndicated profile] 538_feed

Posted by Harry Enten

The Yankees’ longstanding self-important and overpaid ways have been pushed aside by a plucky band of youngsters and rising stars. So is it now fair to say that the franchise once proudly known as the “Evil Empire” is no longer baseball’s most hated team?

Nope. As far as most Americans are concerned, the Yankees are still plenty hateable, thank you very much. In fact, they’re the most hated MLB team.

That’s according to a FiveThirtyEight-commissioned SurveyMonkey Audience poll of 989 self-described baseball fans, conducted June 30 to July 8.8 The poll does provide the Yankees with one talking point: They received more votes as people’s favorite team than any other franchise. But a deeper look at the results reveals that the Cubs are a much better fit for the title of America’s best-liked team (if such a thing even exists).

Because baseball fandom is highly regional, Americans have many favorite teams. The Yankees top the national list of favorites, but with just 10 percent of the vote.

The fight to be America’s favorite team is very close

Share of respondents who said a given team was their favorite

TEAM SHARE FAVORITE TEAM
1 New York Yankees 10%
2 Boston Red Sox 8
3 Chicago Cubs 8
4 Atlanta Braves 8
5 Los Angeles Dodgers 5
6 San Francisco Giants 5
7 Texas Rangers 4
8 St. Louis Cardinals 4
9 Detroit Tigers 4
10 Philadelphia Phillies 4
Seattle Mariners 4
12 New York Mets 3
13 Cincinnati Reds 3
14 Cleveland Indians 3
15 Minnesota Twins 3
16 Baltimore Orioles 3
17 Arizona Diamondbacks 2
Pittsburgh Pirates 2
19 Los Angeles Angels 2
20 Colorado Rockies 2
Kansas City Royals 2
Milwaukee Brewers 2
23 Chicago White Sox 2
Oakland Athletics 2
25 San Diego Padres 2
26 Houston Astros 2
27 Tampa Bay Rays 1
28 Washington Nationals 1
29 Miami Marlins 1
30 Toronto Blue Jays <1

Percentages are rounded.
Responses from a survey of 989 American baseball fans conducted from June 30 to July 8, 2017.

Source: SurveyMonkey

The Boston Red Sox, Chicago Cubs and Atlanta Braves come very close behind at 8 percent, while the West Coast’s bitter rivals, the Los Angeles Dodgers and San Francisco Giants, aren’t far behind at 5 percent. The difference between the top teams is so small that the Yankees would tie for third place (with the Braves), trailing the Cubs and Red Sox, if you exclude fans from the state of New York.

Breaking down the favorite-team results by census region we can see that, unsurprisingly, different areas of the country prefer different teams. While the Yankees are in a tight fight for first place with the Red Sox in the Northeast,9 they don’t come anywhere close to being the favorite team (or even breaking 10 percent) in any other region.

Every region has its own favorite

Share of respondents who said a given team was their favorite by census region

NORTHEAST SOUTH MIDWEST WEST
TEAM SHARE TEAM SHARE TEAM SHARE TEAM SHARE
Yankees 28% Braves 22% Cubs 22% Giants 15%
Red Sox 23 Rangers 12 Tigers 12 Dodgers 13
Phillies 16 Yankees 9 Cardinals 11 Mariners 12
Mets 12 Red Sox 8 Twins 10 D-backs 7
Pirates 10 Orioles 6 Indians 9 Angels 6
Tigers 2 Cubs 5 Reds 8 Yankees 6
Dodgers 2 Astros 4 Brewers 8 Rockies 5
Cubs 1 Cardinals 4 Royals 5 Padres 5
White Sox 1 Reds 3 White Sox 4 Athletics 5
Reds 1 Mets 3 Yankees 3 Red Sox 4
Rockies 1

Percentages are rounded and may not add to 100.
Responses from a survey of 989 American baseball fans conducted from June 30 to July 8, 2017.

Source: SurveyMonkey

The Braves are first in the South, with the Texas Rangers second.10 The Midwest is dominated by the Cubs, trailed by the Detroit Tigers, the Cubs’ arch-rival the St. Louis Cardinals, and the Minnesota Twins, all in double digits. Meanwhile, the Giants are just ahead of the Dodgers and the Seattle Mariners in the West.11

What isn’t regional is how well the Cubs are liked — that finding came up pretty much everywhere. We often think of fandom as stopping at one’s favorite team, but fans can like (or dislike) more than one team. So in addition to asking fans who their favorite team was, we also asked each fan whether they had a favorable or unfavorable view of 10 randomly assigned teams. That means the sample size for each team’s favorable or unfavorable rating was a little over 300 fans. For most teams (19 of 30), 67 percent or less of the fans we polled felt they could offer an opinion — again, suggesting the regionality of baseball. But more than 80 percent of fans had a rating for the Cubs, and they were well-liked by nearly everyone.
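The "a little over 300 fans" figure follows directly from the survey design: 989 fans each rated 10 of the 30 teams, assigned at random. A minimal sketch of that arithmetic (the variable names are mine, not the survey's):

```python
# Each of 989 fans rated 10 teams drawn at random from all 30 MLB teams,
# so the expected number of ratings any one team receives is:
fans = 989
teams_rated_per_fan = 10
total_teams = 30

expected_ratings_per_team = fans * teams_rated_per_fan / total_teams
print(round(expected_ratings_per_team))  # roughly 330 ratings per team
```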

The Cubs are the most liked team and the Yankees the least

Favorable and unfavorable ratings for every MLB team when respondents were each asked their views on ten randomly assigned teams

TEAM RATED BY FAVORABLE UNFAVORABLE NET FAV.
Chicago Cubs 81% 67% 14% +53
St. Louis Cardinals 69 50 19 +31
Kansas City Royals 64 47 17 +30
Boston Red Sox 84 56 28 +28
Colorado Rockies 55 41 14 +27
Baltimore Orioles 67 46 21 +25
San Francisco Giants 69 46 23 +23
Minnesota Twins 58 40 18 +22
Pittsburgh Pirates 64 43 21 +22
Houston Astros 64 43 21 +22
Cleveland Indians 69 45 24 +21
Seattle Mariners 61 41 20 +21
San Diego Padres 61 41 20 +21
Atlanta Braves 70 45 25 +20
Arizona Diamondbacks 59 39 20 +19
Detroit Tigers 58 38 20 +18
Texas Rangers 64 41 23 +18
Los Angeles Angels 63 40 23 +17
Chicago White Sox 69 43 26 +17
Milwaukee Brewers 60 38 22 +16
Oakland Athletics 57 36 21 +15
Los Angeles Dodgers 73 44 29 +15
Cincinnati Reds 61 36 25 +11
Washington Nationals 61 36 25 +11
Tampa Bay Rays 57 34 23 +11
New York Mets 78 43 35 +8
Toronto Blue Jays 58 33 25 +8
Miami Marlins 60 33 27 +6
Philadelphia Phillies 62 33 29 +4
New York Yankees 92 44 48 -4

Percentages and percentage points are rounded.
Responses from a survey of 989 American baseball fans conducted from June 30 to July 8, 2017.

Source: SurveyMonkey

Sixty-seven percent of baseball fans nationally had a favorable view of the Cubs, while just 14 percent had an unfavorable view. Amazingly, this gave the Cubs the highest favorable rating in the poll in addition to a tie with the Colorado Rockies for the lowest unfavorable rating. In every region of the country, the Cubs had a favorable rating of above 60 percent and an unfavorable rating of 20 percent or less. The Cardinals (at +31 percentage points) were a distant second to the Cubs (at +53 percentage points) when it came to net favorability (favorable rating minus unfavorable rating).
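Net favorability here is simply the favorable share minus the unfavorable share. A quick sketch using three rows from the table above (the dictionary layout is mine; the percentages are the poll's):

```python
# Net favorability = favorable rating minus unfavorable rating,
# illustrated with the top two and bottom teams from the table.
poll = {
    "Chicago Cubs":        {"favorable": 67, "unfavorable": 14},
    "St. Louis Cardinals": {"favorable": 50, "unfavorable": 19},
    "New York Yankees":    {"favorable": 44, "unfavorable": 48},
}

net = {team: r["favorable"] - r["unfavorable"] for team, r in poll.items()}
print(net)  # Cubs +53, Cardinals +31, Yankees -4
```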

The Yankees are an entirely different story. While a fairly high 44 percent of fans have a favorable view of the Yankees, they are the only team in the country for which more fans hold an unfavorable view (48 percent) than favorable view. (For the sake of context, no other team has an unfavorable rating above 35 percent.) Yankee fans will be particularly stung by the fact that more fans have a favorable view of the rival Red Sox (56 percent) than the Yankees.

Not only are the Yankees generally disliked, they’re also outright hated by more fans than any other team. When asked to give their least favorite team, an astounding 27 percent of fans said theirs was the Yankees. The Red Sox were a distant second at 10 percent.

The Yankees are America’s most hated team

Share of respondents who said a given team was their least favorite

TEAM SHARE LEAST FAVORITE TEAM
1 New York Yankees 27%
2 Boston Red Sox 10
3 Los Angeles Dodgers 5
4 Arizona Diamondbacks 5
5 Chicago Cubs 4
6 Washington Nationals 4
7 Miami Marlins 3
8 Atlanta Braves 3
9 Chicago White Sox 3
New York Mets 3
11 San Francisco Giants 3
12 Detroit Tigers 3
13 Toronto Blue Jays 3
14 St. Louis Cardinals 2
15 Oakland Athletics 2
Texas Rangers 2
17 Cleveland Indians 2
18 Philadelphia Phillies 2
19 Pittsburgh Pirates 2
20 Minnesota Twins 2
21 Los Angeles Angels 2
Milwaukee Brewers 2
23 Cincinnati Reds 1
24 Houston Astros 1
25 San Diego Padres 1
26 Tampa Bay Rays 1
27 Baltimore Orioles 1
Colorado Rockies 1
29 Kansas City Royals 1
30 Seattle Mariners 1

Percentages are rounded and may not add to 100.
Responses from a survey of 989 American baseball fans conducted from June 30 to July 8, 2017.

Source: SurveyMonkey

The Yankees were the least favorite team in every region in the country, and it wasn’t a particularly close race anywhere.

Every part of America hates the Yankees

Share of respondents who said a given team was their least favorite by census region

NORTHEAST SOUTH MIDWEST WEST
TEAM SHARE TEAM SHARE TEAM SHARE TEAM SHARE
Yankees 34% Yankees 25% Yankees 28% Yankees 26%
Red Sox 17 Red Sox 10 Cubs 11 Dodgers 12
D-Backs 6 D-Backs 6 Cardinals 8 Red Sox 8
Mets 6 Nats 6 Red Sox 5 Giants 6
Cubs 4 Braves 5 Indians 5 Athletics 5
Phillies 4 Tigers 5 White Sox 4 D-Backs 4
Nats 3 Blue Jays 4 D-Backs 4 White Sox 4
Rockies 3 Marlins 4 Braves 4 Marlins 3
Marlins 3 Mets 3 Marlins 3 Rangers 3
Dodgers 2 Phillies 3 Twins 3 Angels 3
Brewers 2 Rangers 3 Nats 3

Percentages are rounded and may not add to 100.
Responses from a survey of 989 American baseball fans conducted from June 30 to July 8, 2017.

Source: SurveyMonkey

Interestingly, we do see that there is at least some correlation between being well-liked in a region and having haters there. The Red Sox are the second-most disliked team in the Northeast,12 the Cubs are the second-most disliked team in the Midwest, and the Dodgers are the second-most disliked team in the West. Perhaps fans of other teams are just jealous of these teams’ popularity, or perhaps there’s a rivalry element to this finding. All of these teams’ top rivals (the Yankees for the Red Sox, the Cardinals for the Cubs and the Giants for the Dodgers) were fairly popular in their own right, and each fan base listed the rival as its least favorite team.

Of course, I don’t think any of these disliked teams in each region are going to be crying about being hated. Ownerships don’t care whether you watch a team to root for or against it — they just care that you watch. Each region’s most and second-most disliked team also ranks among the top 10 in MLB attendance this season. As Oscar Wilde wrote, “There is only one thing in the world worse than being talked about, and that is not being talked about.”

Still, these numbers suggest you shouldn’t mistake notoriety or ticket sales for being well-liked. In some cases, well-known teams (like the Cubs) are also well-liked — but in others (cough, Yankees), these teams can be better described as “notorious.”

Check out our latest MLB predictions.

If Hillary Clinton Had Won

Jul. 20th, 2017 03:01 pm
[syndicated profile] 538_feed

Posted by Nate Silver

What’s different – and what’s the same – in a world where the 2016 election went the other way?

Greetings, citizens of Earth 1! I’m filing this dispatch from Earth 2, where Hillary Clinton got just a few more votes last November than she did in your world. And I really do mean just a few more: On Earth 2, Clinton won an additional 0.5 percent of the vote in each state, and Donald Trump won 0.5 percent less. That was just enough for her to narrowly win three states – Wisconsin, Pennsylvania and Michigan – that she narrowly lost in what you think of as “the real world.” Races for Congress turned out exactly the same here on Earth 2, so Clinton is president with a Republican Congress.
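The Earth 2 premise amounts to a uniform swing: adding 0.5 points to Clinton and subtracting 0.5 from Trump shifts each state's margin by a full percentage point, which flips any state Trump carried by less than a point. A sketch under that assumption, using the approximate 2016 Trump margins of victory in the three decisive states:

```python
# A uniform swing of +0.5 for Clinton and -0.5 for Trump moves each
# state's margin by one full percentage point. Approximate 2016
# Trump margins of victory, in percentage points:
trump_margin = {"Michigan": 0.23, "Pennsylvania": 0.72, "Wisconsin": 0.77}

swing = 0.5 + 0.5  # one point of margin
flipped = sorted(state for state, m in trump_margin.items() if m < swing)
print(flipped)  # all three states flip to Clinton
```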

Things are really different on Earth 2! Merrick Garland is on the Supreme Court instead of Neil Gorsuch. Clinton didn’t enact a “travel ban.” The United States didn’t withdraw from the Paris climate accord. Kellyanne Conway has a CNN show.

[syndicated profile] 538_feed

Posted by Walt Hickey

You’re reading Significant Digits, a daily digest of the numbers tucked inside the news.


12 percent

It’s summer, and how we use energy to cool our homes is of paramount interest to the U.S. Energy Information Administration. All told, 87 percent of homes use air conditioning, and 65 percent of homes use central air conditioning. Now here’s where things get interesting: Most of those central-air homes have a programmable thermostat, an essential tool for using energy efficiently. Still, two-thirds of the homes with a programmable thermostat don’t bother using it. All told, homes that have central air, have a thermostat that can program the use of that central air, and actually use that thermostat — ideal energy consumers — represent only 12 percent of U.S. homes. [EIA]


45 percent

Percentage of Trump voters who believed that Donald Trump Jr. met with a Russian lawyer during the election. That’s remarkably low, given that Donald Trump Jr. personally tweeted out the emails detailing the planning of the meeting, discussed attending the meeting at length in an interview with Sean Hannity, and that the president himself discussed the meeting (which definitely happened) in print yesterday. [The Independent]


74 percent

Placebos work, and faking a surgery may be the greatest placebo of all. A 2014 knee pain study comparing elective surgical procedures with sham surgeries (fear not, it was all on the level; people knew what they were potentially getting into) found that faking a surgery — doing all the fasting and knocking the patient out and making fake incisions and the whole routine — provided some benefit to the ailment in 74 percent of cases and was just as effective as the actual elective surgery roughly half the time. [FiveThirtyEight]


32 million

A CBO score of the latest GOP bill to repeal the Affordable Care Act found it would leave 32 million more people uninsured by 2026 compared to current law. This straight-repeal, no-replace legislation would leave more people uninsured than other recent plans, and it could see a vote next week. [CNN Money]


$216 million

Back in March the New York Power Authority was reportedly presented with a preliminary plan from the Cuomo administration that would fund a light show on a bunch of bridges by raiding $216 million from the MTA. Now that New York City is bordering on open rebellion with the wretched state of the MTA’s subway system during the “summer of hell,” the Cuomo administration is reportedly distancing itself from the light-show plan. [POLITICO Pro]


6,400 million metric tons

Amount of plastic that has become waste since we started doing stuff with plastic in the fifties. So far, 9 percent of that has been recycled, 12 percent has been incinerated, and 79 percent is just doing its own thing in nature or a landfill. Earth made as much plastic in the past 13 years as it did in the rest of human history. [The Atlantic]


If you see a significant digit in the wild, send it to @WaltHickey.

[syndicated profile] 538_feed

Posted by Laura Hudson

Whether it’s done consciously or subconsciously, racial discrimination continues to have a serious, measurable impact on the choices our society makes about criminal justice, law enforcement, hiring and financial lending. It might be tempting, then, to feel encouraged as more and more companies and government agencies turn to seemingly dispassionate technologies for help with some of these complicated decisions, which are often influenced by bias. Rather than relying on human judgment alone, organizations are increasingly asking algorithms to weigh in on questions that have profound social ramifications, like whether to recruit someone for a job, give them a loan, identify them as a suspect in a crime, send them to prison or grant them parole.

But an increasing body of research and criticism suggests that algorithms and artificial intelligence aren’t necessarily a panacea for ending prejudice, and they can have disproportionate impacts on groups that are already socially disadvantaged, particularly people of color. Instead of offering a workaround for human biases, the tools we designed to help us predict the future may be dooming us to repeat the past by replicating and even amplifying societal inequalities that already exist.

These data-fueled predictive technologies aren’t going away anytime soon. So how can we address the potential for discrimination in incredibly complex tools that have already quietly embedded themselves in our lives and in some of the most powerful institutions in the country?

In 2014, a report from the Obama White House warned that automated decision-making “raises difficult questions about how to ensure that discriminatory effects resulting from automated decision processes, whether intended or not, can be detected, measured, and redressed.”

Over the last several years, a growing number of experts have been trying to answer those questions by starting conversations, developing best practices and principles of accountability, and exploring solutions for the complex and insidious problem of algorithmic bias.

 

Thinking critically about the data matters

Although AI decision-making is often regarded as inherently objective, the data and processes that inform it can invisibly bake inequality into systems that are intended to be equitable. Avoiding that bias requires an understanding of both very complex technology and very complex social issues.

Consider COMPAS, a widely used algorithm that assesses whether defendants and convicts are likely to commit crimes in the future. The risk scores it generates are used throughout the criminal justice system to help make sentencing, bail and parole decisions.

At first glance, COMPAS appears fair: White and black defendants given higher risk scores tended to reoffend at roughly the same rate. But an analysis by ProPublica found that, when you examine the types of mistakes the system made, black defendants were almost twice as likely to be mislabeled as likely to reoffend — and potentially treated more harshly by the criminal justice system as a result. On the other hand, white defendants who committed a new crime in the two years after their COMPAS assessment were twice as likely as black defendants to have been mislabeled as low-risk. (COMPAS developer Northpointe — which recently rebranded as Equivant — issued a rebuttal in response to the ProPublica analysis; ProPublica, in turn, issued a counter-rebuttal.)

“Northpointe answers the question of how accurate it is for white people and black people,” said Cathy O’Neil, a data scientist who wrote the National Book Award-nominated “Weapons of Math Destruction,” “but it does not ask or care about the question of how inaccurate it is for white people and black people: How many times are you mislabeling somebody as high-risk?”
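The distinction O’Neil draws — a model can be equally “accurate” for two groups while making very different kinds of mistakes about them — can be made concrete with toy numbers. The counts below are invented for illustration; they are not ProPublica’s data:

```python
def error_rates(counts):
    """False positive and false negative rates from counts of
    (risk label, actually reoffended?) cases."""
    fp = counts[("high", "no")]    # flagged high-risk but did not reoffend
    fn = counts[("low", "yes")]    # flagged low-risk but did reoffend
    negatives = counts[("high", "no")] + counts[("low", "no")]
    positives = counts[("high", "yes")] + counts[("low", "yes")]
    return fp / negatives, fn / positives

# Invented counts for two groups. Within each group, half of those labeled
# high-risk go on to reoffend, so the label "predicts" equally well ...
group_a = {("high", "yes"): 40, ("high", "no"): 40, ("low", "yes"): 10, ("low", "no"): 60}
group_b = {("high", "yes"): 20, ("high", "no"): 20, ("low", "yes"): 20, ("low", "no"): 90}

fpr_a, fnr_a = error_rates(group_a)
fpr_b, fnr_b = error_rates(group_b)

# ... yet the mistakes fall very differently: group A is mislabeled
# high-risk far more often (FPR 0.40 vs. ≈0.18), while group B is
# mislabeled low-risk more often (FNR 0.50 vs. 0.20).
print(fpr_a, fpr_b)
print(fnr_a, fnr_b)
```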

An even stickier question is whether the data being fed into these systems might reflect and reinforce societal inequality. For example, critics suggest that at least some of the data used by systems like COMPAS is fundamentally tainted by racial inequalities in the criminal justice system.

“If you’re looking at how many convictions a person has and taking that as a neutral variable — well, that’s not a neutral variable,” said Ifeoma Ajunwa, a law professor who has testified before the Equal Employment Opportunity Commission on the implications of big data. “The criminal justice system has been shown to have systematic racial biases.”

Black people are arrested more often than whites, even when they commit crimes at the same rates. Black people are also sentenced more harshly and are more likely to be searched or arrested during a traffic stop. That’s context that could be lost on an algorithm (or an engineer) taking those numbers at face value.

“The focus on accuracy implies that the algorithm is searching for a true pattern, but we don’t really know if the algorithm is in fact finding a pattern that’s true of the population at large or just something it sees in its data,” said Suresh Venkatasubramanian, a computing professor at the University of Utah who studies algorithmic fairness.

Biased data can create feedback loops that function like a sort of algorithmic confirmation bias, where the system finds what it expects to find rather than what is objectively there.

“Part of the problem is that people trained as data scientists who build models and work with data aren’t well connected to civil rights advocates a lot of the time,” said Aaron Rieke of Upturn, a technology consulting firm that works with civil rights and consumer groups. “What I worry most about isn’t companies setting out to racially discriminate. I worry far more about companies that aren’t thinking critically about the way that they might reinforce bias by the source of data they use.”

 

Understanding what we need to fix

There are similar concerns about algorithmic bias in facial-recognition technology, which already has a far broader impact than most people realize: Over 117 million American adults have had their images entered into a law-enforcement agency’s face-recognition database, often without their consent or knowledge, and the technology remains largely unregulated.

A 2012 paper, which was coauthored by a technologist from the FBI, found that the facial-recognition algorithms it studied were less accurate when identifying the faces of black people, along with women and adults under 30. A key finding of a 2016 study by the Georgetown Center on Privacy and Technology, which examined 15,000 pages of documentation, was that “police face recognition will disproportionately affect African Americans.” (The study also provided models for policy and legislation that could be used to regulate the technology on both federal and state levels.)

Some critics suggest that the solution to these issues is to simply add more diversity to training sets, but it’s more complicated than that, according to Elke Oberg, the marketing manager at Cognitec, a company whose facial-recognition algorithms have been used by law-enforcement agencies in California, Maryland, Michigan and Pennsylvania.

“Unfortunately, it is impossible to make any absolute statements [about facial-recognition technology],” Oberg said. “Any measurements on face-recognition performance depend on the diversity of the images within the database, as well as their quality and quantity.”

Jonathan Frankle, a former staff technologist for the Georgetown University Law Center who has experimented with facial-recognition algorithms, can run through a laundry list of factors that may contribute to the uneven success rates of the many systems currently in use, including the difficulty some systems have in detecting facial landmarks on darker skin, the lack of good training sets available, the complex nature of learning algorithms themselves, and the lack of research on the issue. “If it were just about putting more black people in a training set, it would be a very easy fix. But it’s inherently more complicated than that.”

He thinks further study is crucial to finding solutions, and that the research is years behind the way facial recognition is already being used. “We don’t even fully know what the problems are that we need to fix, which is terrifying and should give any researcher pause,” Frankle said.

 

The government could step in

New laws and better government regulation could be a powerful tool in reforming how companies and government agencies use AI to make decisions.

Last year, the European Union passed a law called the General Data Protection Regulation, which includes numerous restrictions on the automated processing of personal data and requires transparency about “the logic involved” in those systems. Similar federal regulation does not appear to be forthcoming in the U.S. — the FCC and Congress are pushing to either stall or dismantle federal data-privacy protections — though some states, including Illinois and Texas, have passed their own biometric privacy laws to protect the type of personal data often used by algorithmic decision-making tools.

However, existing federal laws do protect against certain types of discrimination — particularly in areas like hiring, housing and credit — though they haven’t been updated to address the way new technologies intersect with old prejudices.

“If we’re using a predictive sentencing algorithm where we can’t interrogate the factors that it is using, or a credit scoring algorithm that can’t tell you why you were denied credit — that’s a place where good regulation is essential, [because] these are civil rights issues,” said Frankle. “The government should be stepping in.”

 

Transparency and accountability

Another key area where the government could be of use: pushing for more transparency about how these influential predictive tools reach their decisions.

“The only people who have access to that are the people who build them. Even the police don’t have access to those algorithms,” O’Neil said. “We’re handing over the decision of how to police our streets to people who won’t tell us how they do it.”

Frustrated by the lack of transparency in the field, O’Neil started a company to help take a peek inside. Her consultancy conducts algorithmic audits and risk assessments, and it is currently working on a manual for data scientists who want “to do data science right.”

Complicating any push toward greater transparency is the rise of machine learning systems, which are increasingly involved in decisions around hiring, financial lending and policing. Sometimes described as “black boxes,” these predictive models are so complex that even the people who create them can’t always tell how they arrive at their conclusions.

“A lot of these algorithmic systems rely on neural networks which aren’t really that transparent,” said Professor Alvaro Bedoya, the executive director of the Center on Privacy and Technology at Georgetown Law. “You can’t look under the hood, because there’s no such thing as looking under the hood.” In these cases, Bedoya said, it’s important to examine whether the system’s results affect different groups in different ways. “Increasingly, people are calling for algorithmic accountability,” instead of insight into the code, “to do rigorous testing of these systems and their outputs, to see if the outputs are biased.”

 

What does ‘fairness’ mean?

Once we move beyond the technical discussions about how to address algorithmic bias, there’s another tricky debate to be had: How are we teaching algorithms to value accuracy and fairness? And what do we decide “accuracy” and “fairness” mean? If we want an algorithm to be more accurate, what kind of accuracy do we decide is most important? If we want it to be more fair, whom are we most concerned with treating fairly?

For example, is it more unfair for an algorithm like COMPAS to mislabel someone as high-risk and unfairly penalize them more harshly, or to mislabel someone as low-risk and potentially make it easier for them to commit another crime? AURA, an algorithmic tool used in Los Angeles to help identify victims of child abuse, faces a similarly thorny dilemma: When the evidence is unclear, how should an automated system weigh the harm of accidentally taking a child away from parents who are not abusive against the harm of unwittingly leaving a child in an abusive situation?

“In some cases, the most accurate prediction may not be the most socially desirable one, even if the data is unbiased, which is a huge assumption — and it’s often not,” Rieke said.

Advocates say the first step is to start demanding that the institutions using these tools make deliberate choices about the moral decisions embedded in their systems, rather than shifting responsibility to the faux neutrality of data and technology.

“It can’t be a technological solution alone,” Ajunwa said. “It all goes back to having an element of human discretion and not thinking that all tough questions can be answered by technology.”

Others suggest that human decision-making is so prone to cognitive bias that data-driven tools might be the only way to counteract it, assuming we can learn to build them better: by being conscientious, by being transparent and by candidly facing the biases of the past and present in hopes of not coding them into our future.

“Algorithms only repeat our past, so they don’t have the moral innovation to try and improve our lives or our society,” O’Neil said. “But as long as our society is itself imperfect, we are going to have to adjust something to remove the discrimination. I am not a proponent of going back to purely human decision-making, because humans aren’t great. … I do think algorithms have the potential for doing better than us.” She paused for a moment. “I might change my mind if you ask me in five years, though.”

[syndicated profile] 538_feed

Posted by Walt Hickey, Nate Silver, Christine Laskowski and Tony Chow

The margarita is one of those rare iconic cocktails that have a half-dozen recipes that can each lay claim to being the best, which poses some problems for your everyday margarita drinker. If you turn to the internet for help, you’ll find hundreds of recipes, and it can be tough to tell which ones are worth your while. The overabundance of choice means a person can find essentially any permutation of tequila, orange liqueur and lime billing itself as a marg.

We wanted to find the best margarita recipe, so we pulled 78 of the internet’s suggestions — we ignored all those lamentable sour-mix concoctions and coconut-pomegranate-passionfruit abominations, focusing just on the beverages anchored by tequila, lime juice and orange liqueur. Then we applied something called a k-means clustering algorithm to determine the four main types of margaritas. Taking a mere average would have resulted in a monstrosity of a drink that was trying to be several things at once, but the clustering algorithm gives us several distinct platonic ideals of a margarita, letting our human taster determine which one is best.
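A minimal NumPy sketch of the clustering step, with made-up recipe vectors (tequila, orange liqueur, lime juice, sweetener, in ounces) standing in for the 78 scraped recipes:

```python
import numpy as np

# Hypothetical recipes: (tequila, orange liqueur, lime juice, sweetener)
# in ounces. Illustrative values only, not the actual 78 scraped recipes.
recipes = np.array([
    [1.5, 0.75, 0.75, 0.00],  # classic-style
    [1.5, 0.80, 0.70, 0.00],
    [1.5, 0.25, 0.75, 0.25],  # tequila-forward
    [1.5, 0.20, 0.80, 0.30],
    [1.5, 0.75, 0.75, 0.50],  # sweeter
    [1.5, 0.70, 0.80, 0.55],
    [1.5, 0.25, 1.25, 0.50],  # limey and tart
    [1.5, 0.30, 1.20, 0.45],
])

def kmeans(data, k, iters=100, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every recipe to every centroid.
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update centroids; keep the old one if a cluster goes empty.
        new = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

labels, centroids = kmeans(recipes, k=4)
# Each centroid is a "platonic ideal" recipe: the average of one cluster,
# rather than one muddled average over every recipe at once.
print(centroids)
```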

In the video above, you can see us head to Dutch Kills bar in lovely Queens, New York, to test those recipes and figure out which is the best of the bunch. Here are your contenders.

The Classic
1 1/2 oz. tequila
3/4 oz. orange liqueur
3/4 oz. lime juice

The Tequila-Forward
1 1/2 oz. tequila
1/4 oz. orange liqueur
3/4 oz. lime juice
1/4 oz. agave nectar

The Sweet & Easy
1 1/2 oz. tequila
3/4 oz. orange liqueur
3/4 oz. lime juice
1/2 oz. agave nectar
1/2 oz. water
1/4 oz. lemon juice

The Limey & Tart
1 1/2 oz. tequila
1/4 oz. orange liqueur
1 1/4 oz. lime juice
1/2 oz. simple syrup

Watch the video to see which recipe won!

[syndicated profile] 538_feed

Posted by Neil Paine

With first-time winners taking each of the last seven major titles, you might not think experience counts for much in golf anymore. But as the world’s top players head to Royal Birkdale for this week’s (British) Open, the tournament serves as a reminder that the unique challenges of links-style courses still provide at least one championship showcase for golf’s greybeards.

Traditionally speaking, championship golfers do the bulk of their winning in their late 20s and early-to-mid 30s: Since 2000, about 60 percent of major winners were age 32 or younger at the time of their victory. But the big exception seems to be the British Open, whose champs are consistently much older than those of the other majors. Of the five major wins by the 40-and-older set since 2000, only one of them didn’t come at The Open (Vijay Singh’s 2004 PGA Championship win). According to ESPN’s Stats & Information Group, the average age for British Open winners since 2000 was 33.7, while the average age for all other major champs was 30.7.

And the results have been even more extreme in recent years, with four 40-somethings winning the Claret Jug this decade, and this doesn’t even count Zach Johnson (who won The Open at age 39 in 2015), nor does it reflect the heroic near-misses from old-timers this century, such as 59-year-old Tom Watson’s playoff loss at Turnberry in 2009 and 53-year-old Greg Norman’s third-place finish in 2008 — the last time Birkdale hosted the event. Since 2011, the average age for The Open winners has been 38.5, nearly 10 years older than the average of the other three majors (28.7), according to ESPN Stats & Information.

So why do older players excel at the British? One reason might be in the way experience helps players deal with the ever-changing weather conditions that often beset The Open — and how those same atmospheric effects negate the advantages of long-hitting younger players.

To test this theory, I looked at data provided by Stats & Info for the first two rounds of each British Open since 1983. For players who have birthdate information in the database, I broke them down into the following categories: “Young” (ages 28 or below — the youngest 25 percent of players), “Old” (ages 39 or older — the oldest 25 percent of players) and “Regular” (everyone else). I also recorded whether the average score for a given round was more than three strokes over par, considering such rounds to have “high-scoring” conditions. This is admittedly an imperfect proxy for weather effects, but in the absence of tee times and climate data, it will have to do as a means of flagging rounds where conditions were challenging.
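The bucketing described above reduces to a pair of simple rules. The ages and round averages in the sketch below are invented examples, not the Stats & Info data:

```python
def age_group(age):
    # Quartile-based buckets from the article: the youngest 25 percent of
    # players were 28 or younger, the oldest 25 percent were 39 or older.
    if age <= 28:
        return "Young"
    if age >= 39:
        return "Old"
    return "Regular"

def high_scoring(field_average, par):
    # A round counts as "high-scoring" when the field averaged more than
    # three strokes over par -- a rough proxy for tough weather.
    return field_average - par > 3

# Invented examples of how rounds and players would be tagged:
print(age_group(23), age_group(33), age_group(47))  # Young Regular Old
print(high_scoring(75.4, 71))  # field averaged +4.4: tough conditions
print(high_scoring(72.8, 71))  # field averaged +1.8: a normal round
```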

When scoring conditions were normal, old and young players shot equally well relative to the field average. (Players who fit neither category shot about a third of a stroke better on average, which makes sense given those players were in the primes of their careers.) But when conditions got bad, the young players shot worse — and the older ones shot better. In high-scoring rounds, young players lost about a third of a stroke per round relative to older players, an even bigger margin than the quarter-stroke they lost relative to prime-aged players.

When the going gets rough, the old get going

First- and second-round scores vs. field average under normal and high-scoring conditions in the British Open, by age, 1983-2016

                        NORMAL ROUND            HIGH-SCORING ROUND
PLAYER AGE              ROUNDS   SCORE VS AVG.  ROUNDS   SCORE VS AVG.   DIFF.
Old (39 and older)       1,616           -0.11     730           -0.28   -0.17
Regular (29–38)          2,765           -0.47   1,153           -0.58   -0.11
Young (28 and younger)   1,408           -0.12     726            0.00   +0.12

A “high-scoring round” is one in which the field average was more than three strokes over par.

Source: ESPN Stats & Information Group

Open weather can infamously turn on a dime, and it requires shots of a very different shape than the usual ones many younger Americans have spent the vast majority of their careers playing. So at least in part, this is evidence that experience — and not raw power — can help a player better navigate around such challenging conditions.

And that shouldn’t be any different this time around, with typically rainy, gusty weather in the forecast for Royal Birkdale. So although this has been a great season for young players on the PGA Tour, don’t be surprised if the sport’s elder statesmen take center stage in England this week.

Surgery Is One Hell Of A Placebo

Jul. 19th, 2017 05:22 pm
[syndicated profile] 538_feed

Posted by Christie Aschwanden

The guy’s desperate. The pain in his knee has made it impossible to play basketball or walk down stairs. In search of a cure, he makes a journey to a healing place, where he’ll undergo a fasting rite, don ceremonial garb, ingest mind-altering substances and be anointed with liquids before a masked healer takes him through a physical ritual intended to vanquish his pain.

Seen through different eyes, the process of modern surgery may look more spiritual than scientific, said orthopedic surgeon Stuart Green, a professor at the University of California, Irvine. Our hypothetical patient is undergoing arthroscopic knee surgery, and the rituals he’ll participate in — fasting, wearing a hospital gown, undergoing anesthesia, having his surgical site prepared with an iodine solution, and giving himself over to a masked surgeon — foster an expectation that the procedure will provide relief, Green said.

These expectations matter, and we know they matter because of a bizarre research technique called sham surgery. In these fake operations, patients are led to believe that they are having a real surgical procedure — they’re taken through all the regular pre- and post- surgical rituals, from fasting to anesthesia to incisions made in their skin to look like the genuine operation occurred — but the doctor does not actually perform the surgery. If the patient is awake during the “procedure,” the doctor mimics the sounds and sensations of the true surgery, and the patient may be shown a video of someone else’s procedure as if it were his own.

Sham surgeries may sound unethical, but they’re done with participants’ consent and in pursuit of an important question: Does the surgical procedure under consideration really work? In a surprising number of cases, the answer is no.

A 2014 review of 53 trials that compared elective surgical procedures to placebos found that sham surgeries provided some benefit in 74 percent of the trials and worked as well as the real deal in about half. Consider the middle-aged guy going in for surgery to treat his knee pain. Arthroscopic knee surgery has been a common orthopedic procedure in the United States, with about 692,000 of them performed in 2010, but the procedure has proven no better than a sham when done to address degenerative wear and tear, particularly on the meniscus.

Meniscus repair is only one commonly performed orthopedic surgery that has failed to produce better results than a sham surgery. A back operation called vertebroplasty (done to treat compression fractures in the spine) and something called intradiscal electrothermal therapy, a “minimally invasive” treatment for herniated disks and low back pain, have also produced study results that suggest they may be no more effective than a sham at reducing pain in the long term.

Such findings show that these procedures don’t work as promised, but they also indicate that there’s something powerful about believing that you’re having surgery and that it will fix what ails you. Green hypothesizes that a surgery’s placebo effect is proportional to the elaborateness of the rituals surrounding it, the surgeon’s expressed confidence and enthusiasm for the procedure, and a patient’s belief that it will help.

Weirdly enough, surgery’s invasiveness may explain some of its potency. Studies have shown that invasive procedures produce a stronger placebo effect than non-invasive ones, said researcher Jonas Bloch Thorlund of the University of Southern Denmark. A pill can provoke a placebo effect, but an injection produces an even stronger one. Cutting into someone appears to be more powerful still.

Even without a robust placebo effect, an ineffective surgery may seem helpful. Chronic pain often peaks and wanes, which means that if a patient sought treatment when the pain was at its worst, the improvement of symptoms after surgery could be the result of a condition’s natural course, rather than the treatment. That softening of symptoms from an extreme measure of pain is an example of the statistical concept of regression to the mean.
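Regression to the mean is easy to see in a small simulation (all numbers invented): if “patients” are only selected for treatment on their worst-pain days, their pain at a later follow-up tends to be lower even with no intervention at all.

```python
import random

random.seed(1)

def pain_today(chronic_level):
    # Daily pain fluctuates randomly around a stable chronic level.
    return chronic_level + random.gauss(0, 2)

chronic = 5.0
days = [pain_today(chronic) for _ in range(10_000)]

# Select only the worst days -- the moments a patient would seek surgery.
threshold = 8.0
selected = [d for d in days if d > threshold]

# "Follow-up" pain on a fresh day, with no treatment whatsoever:
followups = [pain_today(chronic) for _ in selected]

avg_selected = sum(selected) / len(selected)
avg_followup = sum(followups) / len(followups)

# Pain "improves" purely because extreme days revert toward the mean.
print(round(avg_selected, 1), "->", round(avg_followup, 1))
```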

And then there’s what Thorlund calls “car repair” logic — something looks broken, so you try to fix it. A patient comes in with knee pain, and an X-ray or MRI exam shows a tear in the meniscus. The tendency is to assume that the torn meniscus is the cause of the pain and so should be fixed. However, studies show that MRIs can find all kinds of “abnormalities,” such as cartilage damage, even among people without knee pain. One such study looked at the MRI scans of more than 300 knees and found no direct link between meniscus damage and pain. “You can have a meniscal tear without having any problems,” Thorlund said.

Back pain follows a similar pattern. Studies that examined MRIs of people’s backs show that things like slipped, bulging or herniated disks correlate very poorly with pain. Herniated disks and other supposed abnormalities are also common in people without back pain, and it’s telling that studies find that spinal fusion, another popular back surgery used to address disk problems, does not produce better results than nonsurgical interventions.

Given these results, why do these surgeries remain so widespread? Because the ineffectiveness of these procedures can be hard for doctors to see. “Largely, surgeons believe that they are doing the right thing,” writes surgeon Ian Harris in his book “Surgery, the Ultimate Placebo.” Yet in many cases, “the real benefit from surgery is lower and the risks are higher than you or your surgeon think,” he writes. It’s not always a matter of surgeons ignoring the evidence. In some cases, there’s simply a lack of high-quality studies, and that “allows surgeons to do procedures that have always been done, those that their mentors taught them to do, to do what they think works, and to simply do what everyone else is doing,” Harris writes.

Surgeons who perform only real surgeries never see the benefits of sham procedures and so may falsely attribute their patients’ success to the surgery without recognizing that regression to the mean and the placebo effect might also contribute. Patients can also be fooled, Green said, recalling how in one arthroscopic surgery experiment, patients in the placebo group improved so much that they were “flabbergasted” to learn that they’d received the sham treatment.

Could the placebo effect be harnessed for good in the same way that some researchers have used placebo pills to treat ADHD and irritable bowel syndrome? When I posed that question to Thorlund, his answer was a resounding no. Even sham surgery could pose the risk of serious, life-threatening complications. “I don’t think it’s ethical,” he said.

Batting Average Is So 19th Century

Jul. 19th, 2017 02:38 pm
[syndicated profile] 538_feed
 

Welcome to the latest episode of Hot Takedown, FiveThirtyEight’s sports podcast. The Hot Takedown crew is out of the office this week, so we’re bringing you a special blast from the past: our very first Stat School. In this episode, Neil dons the Stat Man cape, explaining the three ways to measure batting in order of increasing complexity: batting average, OPS (on-base plus slugging), and wRC+ (weighted runs created plus). Will Chad and Kate absorb all of his information and receive their certificates at the end of the course?

Here are links to the things we discussed during the show:

[syndicated profile] 538_feed

Posted by Walt Hickey

You’re reading Significant Digits, a daily digest of the numbers tucked inside the news.


0.64 percent

A study from Ball State University reports that a quarter of U.S. jobs are vulnerable to offshoring, but the stunning statistic is an estimate that half of net business formation in the United States since the recession has happened in 0.64 percent of U.S. counties. That’s 20 of the 3,100 U.S. counties. [U.S. News & World Report, CBER]


8 percent

Several people reportedly got sick after eating at the same Chipotle in Sterling, Va. This could not be worse news for the company, which has seen previous outbreaks of food-borne illness. Chipotle’s stock price plunged 8 percent in a matter of hours on Tuesday, wiping out all its gains for the year. [Bloomberg]


28.5 percent

Percent of female comic book characters with a gendered name that had a diminutive in it — think names like “Wonder Girl,” “Wonder Lass” or “Wonder Doll” rather than “Wonder Woman” — compared to 12.6 percent of male characters with a gendered name. [The Pudding]


8 Republicans

With the support of seven Assembly Republicans and one Senate Republican, California’s Democratic-controlled legislature successfully passed Assembly Bill 398, which extends the state’s cap-and-trade program. [The Los Angeles Times]


HB 640

New Hampshire Gov. Chris Sununu signed HB 640 and made New Hampshire the 22nd state to decriminalize marijuana. The “live free or die” state was slower than the likes of Mississippi and Nebraska in making this change. [Cannabis Now]


$2.3 trillion

Annual U.S. health spending on chronic physical and mental conditions such as diabetes and high blood pressure. Controlling those illnesses — and staving off the deleterious and far more expensive health maladies that they can lead to — could save the U.S. health care system a lot of money. [U.S. News & World Report]


If you see a significant digit in the wild, send it to @WaltHickey.

[syndicated profile] 538_feed

Posted by Perry Bacon Jr.


So you wake up one day, get on Twitter and find everyone buzzing about some story in The New York Times or The Washington Post about some associate or friend of President Trump’s who has some connection to Russia. You read the story, but not only are the sources unnamed, they are unnamed in all kinds of different ways — an “intelligence source” in paragraph three, “administration officials” in paragraph seven, “people familiar with the investigation” in the next one and “law enforcement officials” at the end. You understand all the words on the screen, but you don’t really understand who’s telling you what or why.

In the first part of our guide to unnamed sources, we laid out some general tips for making sense of these kinds of stories. In this part, we want to get more specific, to help you to essentially decode these stories. We also want you to be able to know which stories you should rely on based on the different kinds of sourcing used.

So we’re going to divide anonymous sources into six general types and give the pros and cons of each, in terms of reliability. We ordered the types of unnamed sources, roughly speaking, from most reliable to least reliable (at least in my experience):

1. Organization sources

“White House officials,” “Justice Department officials,” “Pentagon officials,” “Clinton campaign officials,” “Republican leadership aides”

Why you should trust these sources: Close to 70,000 people work at the State Department, so there’s a huge number of potential “State Department officials” to be quoted anonymously. But in reality, most beat reporters aren’t talking to people up and down a department at every level. A story attributed to a large federal department and published in The Washington Post will almost certainly have been run by the department’s spokesperson, giving him or her the chance to rebut it. If a story includes a line like “State Department officials said X” but no spokesperson is directly quoted in the story, you should generally assume that this is a disclosure authorized by the top officials in that agency. Maybe the State Department wants the secretary, Rex Tillerson, and not a spokesperson to announce a policy publicly, so the members of the press team opt to confirm the story but not use their names. An unnamed source isn’t always a whistleblower or someone talking behind the boss’s back.

Be wary, however, of putting too much trust in adjectives such as “senior” or “high-ranking” when applied to a source. These are organizational sources, sure. But there is no technical definition of “senior White House official,” so this person could be press secretary Sean Spicer or Trump himself.

Why you shouldn’t trust these sources: Sometimes departments want to float ideas that a spokesperson would not want to put his or her name behind. CBS News, for example, ran a story in May in which unnamed White House officials were quoted calling the leaks about the various Russia controversies “coordinated and timed” to hurt Trump. Trump White House aides may think that is true. But suggesting that leaks and stories about Trump and Russia are somehow coordinated and timed by sources and journalists, as opposed to going through the normal process — sources giving journalists tips, reporters trying to verify them and then putting out stories after confirming the information — sounds a bit conspiratorial. Going unnamed allows these sources to bash the Russia coverage in a way that White House aides might not be comfortable doing with their names attached.

And as I mentioned in the first part of this series, outlets aimed at politicos — such as Axios and, well, Politico — frequently publish claims that the administration will do X or Y. Often, these are trial balloons — the White House or a federal agency wants to see how the press and public react to something — and they never come to pass.

2. “Familiar” people

“A person familiar with person X’s thinking,” “sources familiar with person X’s plans,” “associates of person X”

Why you should trust these sources: Quotes attributed to sources “familiar with the thinking” of a person are often quite reliable.

Why? A major newspaper like The New York Times or The Washington Post is not going to suggest that a source is familiar with someone’s thinking without being pretty sure of it. This is a fairly precise term. It also puts the news organization at a clear risk, as person X can obviously deny what an article has said he or she is thinking.

Generally, these kinds of source descriptions mean that the reporter spoke either to the actual subject (meaning that “a source familiar with the thinking of Chief Justice John Roberts” is Roberts) or to a person designated by the subject to give his or her account to the reporter.

In the wake of Trump’s firing of FBI Director James Comey, the “associates of” Comey who gave accounts of his interactions with Trump and his aides to The New York Times and other news outlets were obviously authorized by Comey and essentially telegraphing the story that he would eventually testify to publicly.

Why you shouldn’t trust these sources: By going unnamed or relying on allies, the subject of these stories (say, Comey before his testimony) is unwilling to commit publicly to whatever narrative he or she is telling. So while the broader outlines are likely correct, the narrative could be exaggerated or misleading in some ways.

Secondly, this kind of sourcing has the potential for abuse. Other reporters can call the State Department to check the veracity of a story attributed to “State Department officials.” But it was not obvious that “associates of Comey” would lead to a Columbia law professor named Daniel Richman. (Comey testified that he gave a friend one of his memos describing his interactions with Trump and that the friend, a professor at Columbia law school, read some of the details of the memo to a journalist. Comey did not name the journalist. The first story about the Comey memo was written by Michael Schmidt, a New York Times reporter who has covered Comey extensively.)

Another reporter could contact Comey to confirm this kind of story, which is something, but if Comey refused to talk, there wasn’t a clear second option.

Also, there is one person causing some specific problems with this kind of sourcing: Trump. The president seems to speak with a wide range of people, both inside and outside the White House. And many of these people then tell reporters that they talked to the president. That leaves a lot of people for journalists to credibly say are “familiar with Trump’s thinking,” but that does not necessarily mean that these sources give an accurate picture of what the president will do. The constant stories about staff shake-ups at the White House may indeed come from people who have heard Trump muse about changes that he will never actually follow through on.

 

3. The Law

“Law enforcement officials”

Why you should trust these sources: In my experience, in national news stories, “law enforcement sources” usually means representatives of the Department of Justice or FBI (technically, the FBI is part of DOJ), making the general principles described in the “organization sources” section above applicable here too. In particular, look for the plural “officials” over the singular “official.”

Why you shouldn’t trust these sources: This kind of sourcing is relatively opaque. The Secret Service, the FBI, the U.S. Capitol Police, the D.C. police department, the U.S. Justice Department and the U.S. attorney’s office in D.C. would all count as law enforcement agencies based in Washington. If you were a reporter trying to check out a story attributed to “law enforcement officials,” you would need to call all these agencies.

And sometimes these agencies disagree with one another. At his Senate hearing, Comey described his discomfort (and disagreement) with the terminology that the previous attorney general, Loretta Lynch, wanted to use when publicly discussing the probe into Hillary Clinton’s use of e-mail as secretary of state. (According to Comey, Lynch wanted to refer to the probe as a “matter,” not an “investigation.”) So a story referring to “law enforcement officials” about the e-mail controversy could have had different takes, depending on whether the sources were aligned with Lynch or Comey.

 

4. The spies

“Intelligence officials”

Why you should trust these sources: The number of publications with intelligence community reporters is very small. You are unlikely to read a story quoting unnamed intelligence officials outside of the big papers, like the Washington Post, New York Times and Wall Street Journal, and the major television news networks. So the general reliability of those outlets helps give these stories credibility.

Why you shouldn’t trust these sources: As is the case with law enforcement sources, “intelligence officials” could refer to many agencies in the U.S. government: the FBI, CIA, NSA, the intelligence departments at the Defense and State departments. The U.S. Senate and U.S. House also have intelligence committees with staffs, so those people could also be described as intelligence officials. And some reporters have sources within intelligence agencies in other nations, and they would also fall under this category. So this sourcing is opaque.

Also, even if the intelligence sources are accurately reporting their own views, the intelligence itself could be wrong or overhyped (see weapons of mass destruction in Iraq). And there is very little ability for a reporter to push back. A political reporter can travel to Ohio and look for signs that Hillary Clinton or Donald Trump has a strong organization in the Buckeye State. It is much harder for an intelligence reporter to verify something like Russia’s hacking efforts through anything other than his or her sources.

 

5. Politicians and their staffers

“Administration officials,” “congressional sources”

Why you should trust these sources: Generally, the term “administration officials” is used by journalists to refer to political appointees. So “Trump administration officials” means people who are aligned with the administration, not just federal workers. When sources ask for this designation, they are often trying to shield their identity more carefully (“State Department officials” narrows down the universe of sources) or may be trying to downplay the role of their department (Treasury, State, etc.).

Why you should not trust these sources: This sourcing is opaque and has potential for errors. A White House employee, for example, could be describing something that he or she expects the State Department to do, and State may not be in line with the White House view. Or vice versa.

And Congress is, technically, an organization, like State or Defense. But Congress is really a body of 535 independent entities loosely aligned under two parties. A source in Senate Majority Leader Mitch McConnell’s office will have different information and a different agenda than one in the office of House Democratic leader Nancy Pelosi. “Congressional sources” is better than nothing, but only barely.

 

6. Sourcing that tells you nothing

“People familiar with the investigation,” “U.S. officials briefed on intelligence reports,” “current and former officials familiar with the investigations,” “one current and one former American official with knowledge of the continuing congressional and F.B.I. investigations,” “Republican strategist,” “Democratic strategist,” “senior Republicans”

Why you should trust these sources: The first several phrases here come from stories about the interactions that Jared Kushner, Trump’s son-in-law and adviser, has had with various Russian figures. Phrases like “senior Republicans” and “Democratic strategist” come from political coverage.

This style of sourcing has a “just trust us” quality to it, and the descriptions of the sources are essentially meaningless. A “former American official” could be anyone who ever worked in the U.S. government. “People familiar with” the Russia investigation could range from low-level officials at the Department of Justice to former President Barack Obama. One assumes that the lawyers, consultants and others employed by the various people being written about in Trump-Russia stories are “familiar with the investigation.” They could be the sources for some stories.

(The difference between “sources familiar with Comey’s thinking” and “sources familiar with the investigation” is that the former is both more verifiable and more risky for the news outlet. You can contact Comey to check his thinking, and he can call the Times to say if his thinking has been described incorrectly. “Sources familiar with the investigation,” on the other hand, does not put anyone on the spot, and investigators rarely go on the record during an investigation — even to say that published accounts are wrong.)

A “Democratic strategist” could be anyone who worked in any Democratic campaign or on the staff of any Democratic office-holder, at any level of government. This type of sourcing is also often used by people who are not government officials at all, but political consultants.

So, why should you trust these stories? You are truly relying on the reporters and the outlets here — and on their records of reporting verifiable claims. Some publications and journalists have established strong reputations for trustworthiness. The Washington Post reported stories that essentially forced the resignation of Michael Flynn, Trump’s national security adviser, earlier this year. Marty Baron, the Post’s top editor, ran The Boston Globe when it broke the stories about the Catholic Church’s cover-up of sexual abuse of children by priests — coverage featured in the movie “Spotlight.” Earlier this year, outlets investigating Trump-Russia stories may have appeared to be pushing an allegation that seemed far-fetched (some kind of direct collusion, coordination or at least general prior knowledge of the Russian hacking effort by the Trump campaign). But at this point, the Flynn firing, the Comey firing, Donald Trump Jr.’s meeting with a Kremlin-associated lawyer and other events have validated the decisions by the Post, Times, CNN and other outlets to invest heavily in reporting on the Trump-Russia connection.

“One thing that I think really needs explaining to non-journalists is the number of people at a newspaper or network who will read an investigative story” before it runs, said Al Cross, who was a longtime reporter at the Louisville Courier-Journal and now teaches at the University of Kentucky’s Institute for Rural Journalism and Community Issues. “News consumers say they get most of their news from television, which emphasizes the individual roles of the anchor and reporter and does little to remind people that journalism is a collective act.”

Why you should not trust these sources: The competition to get scoops among journalists and big papers creates pressure that could lead to the overhyping of certain stories or the use of weak sourcing that leads to inaccuracies.

CNN last month accepted the resignations of three journalists, including a top editor in charge of an investigative unit, after announcing that it could not stand behind a story it had published on the Russia controversy. The article, which has now been removed from CNN’s website, relied on a single unnamed “congressional source” to suggest that Congress was investigating ties between a Russian investment fund and people connected to Trump. One of the Trump allies named in the CNN story, Anthony Scaramucci, publicly denied the account.

“The zeal to break news can create haste that leads to flawed reporting,” wrote the Post’s media reporter, Paul Farhi, in the wake of the CNN resignations. “Like all major news organizations, CNN is under pressure to produce scoops that draw ratings and Web traffic, and to stay competitive with the likes of the New York Times and The Washington Post, which have been leaders on the Trump-Russia story.”

 

Conclusions: Caveat lector

“The whole system of anonymous sources has a flaw,” said Jay Rosen, a journalism professor at New York University. “Sometimes the name that is withheld is bigger news than the news the withheld name is offering. But there is no way for the readers to know because the name is … withheld.”

Rosen is right. But as a reader, you don’t have any other options. Washington stories have always been full of unnamed sources. But now, we are in a unique era: an administration with a lot of factions, often fighting with one another; a federal bureaucracy skeptical of its boss; a Republican majority in Congress leery of Trump but often not wanting to blast him with their names attached. So there are lots of people who want to talk to the press, but also lots of incentives for them to do so without their names attached. Heck, the former FBI director was essentially acting as an unnamed source, so you can imagine that others with fewer credentials (or more to lose) are even more afraid to go on the record.

So our advice is: Read all of these vaguely sourced stories with skepticism. But if you really want to keep up with Trump’s Washington, you probably don’t have a choice but to read some stories with unnamed sources.

drwex: (VNV)
[personal profile] drwex
Yes, I will be posting music entries Real Soon Now, I promise. Probably next week. But first I want to unload some of the stuff in the mental backlog.

I really appreciated all the commentary on the last post. If y'all want to chime in about this one I'd likewise appreciate it. The topic is "Music video WTF" - as in, should I link to videos if I like the song but not the video?

Here, let me give you an example that sits right on the borderline, two videos for "One On One" by Tujamo, with vocals by Sorana. Tujamo is a German producer and EDM spinner; Sorana is an eastern European singer (near as I can guess, Romanian) and this is her first big team-up with a "name" producer. So, OK, great. It's a fun tune and I like her voice, though as with a lot of these things I think it's over-tuned.

First up, the official video for the song:
https://www.youtube.com/watch?v=Y19FzsqM1as

Minor warning: it's a PoV video done in the style of a lot of porn these days where you, the viewer, are invited to have the gaze of the (male) camera in intimate interactions with a small, very conventionally attractive woman through a series of scenes, including bedroom. There's nothing actually X-rated about this, but I was uncomfortable watching it. In case that gaze isn't intimate enough for you, there's even an official 3D-VR version - https://www.youtube.com/watch?v=lx6OeuZ-mLE

Plus side: she's smiling and active throughout. She appears to be not only enjoying the interactions but initiating them. But if voyeurism isn't your kink (it's not mine, at least not for strangers), then you may, like me, find yourself unable to watch this video and go looking for alternatives. Here's one:
https://www.youtube.com/watch?v=8gVZnnxvf38

At least that's just a static conventionally-attractive-skinny-chick-half-dressed-in-provocative-pose. You see that kind of thing selling pretty much any product under the sun everywhere in the industrialized world. But, seriously, what does this have to do with the music?

I usually try to link to SoundCloud for my music choices but lots of things aren't up there and are on YouTube or other visual media.

So, dear readers, what do you make of this? Would you rather I didn't blog video music that sets me off, or blog it with information so you can judge for yourselves?
[syndicated profile] 538_feed

Posted by Neil Paine

For most Major League Baseball teams, the trade deadline is a chance to step back and take stock of the franchise’s trajectory. Although only a small fraction of rumored deals actually end up happening, a team’s willingness to swap assets — as either a buyer or a seller — says a lot about where it is in the cycle between contending for a World Series and playing for the future.

For a few teams, the choice has already been made. These are the clubs on the ends of the baseball spectrum: the bottom dwellers already committed to punting the present in order to stockpile young talent and the clear front-runners who can begin fine-tuning their playoff rosters in July.

But the bulk of the league faces a fork in the road and doesn’t have the luxury of soul-searching with the trade deadline less than two weeks away. The decision to buy or sell is both critical — botched maneuvers can cripple a franchise for years — and further complicated by whether teams are getting a “rental” player (with an expiring contract) or someone who can help them for the next few years. But fear not, baseball general managers, we are here to help.

A few years ago, my colleague Nate Silver and I developed a statistical framework for trade-deadline strategy: the Doyle Number (named for a certain pitcher the Detroit Tigers mortgaged their future to acquire at the 1987 deadline). Doyle represents the number of future wins a team should be willing to part with in exchange for adding an extra win of talent this season. So a Doyle of 1.00 means a team should be indifferent to buying or selling — a one-win improvement this year adds as much to its current World Series odds as a future win would add over the long term. If its Doyle rises any higher, it should probably be buying (since wins this year are more valuable than future wins); any lower, and it should be selling.

For example, the Cleveland Indians currently have a Doyle Number of 1.48. With a good (though not quite great) roster and decent (but not quite ironclad) division-series odds, they should probably be trying to add talent over the next few weeks to bolster their chances of returning to the World Series. Meanwhile, the New York Mets’ Doyle is 0.08; their injury-riddled talent base is mediocre, and they have very little shot at the division series, so they should be selling off anyone who isn’t nailed down.
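The buy/sell logic can be sketched in a few lines of Python. The Doyle values come from the article; the function, the threshold band around 1.00, and the labels are illustrative assumptions, not FiveThirtyEight’s actual implementation:

```python
def deadline_stance(doyle: float, band: float = 0.1) -> str:
    """Classify a team's trade-deadline posture from its Doyle Number.

    A Doyle of 1.00 means indifference: a win of talent this season is
    worth exactly one win of future talent. The `band` around 1.00 is an
    illustrative buffer, not part of the published methodology.
    """
    if doyle > 1.0 + band:
        return "buy"    # wins now are worth more than future wins
    if doyle < 1.0 - band:
        return "sell"   # future wins are worth more than wins now
    return "hold"       # roughly indifferent

# Figures from the article:
print(deadline_stance(1.48))  # Indians -> buy
print(deadline_stance(0.08))  # Mets -> sell
```

A team sitting near 1.00, like the Diamondbacks at 1.0 in the table below, falls in the "hold" band under this toy rule, which matches the article's "cautious buyers" framing.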

With those ground rules in place, here’s every team’s Doyle Number as of July 16:

Where each team stands at the deadline

Teams ranked by Doyle Number — how many future wins of talent a team should trade away to acquire 1 win this season

SOLID BUYERS ELO RATING EXP. WINS PER 162 GAMES DIV. SERIES ODDS WORLD SERIES ODDS DOYLE NUMBER
Dodgers 1598 101.8 98.5% 26.5% 2.2
Astros 1591 100.3 99.7 24.6 2.2
Nationals 1551 91.5 93.8 12.8 1.9
Red Sox 1544 89.8 69.2 8.3 1.6
Indians 1544 89.8 57.4 6.9 1.5
CAUTIOUS BUYERS ELO RATING EXP. WINS PER 162 GAMES DIV. SERIES ODDS WORLD SERIES ODDS DOYLE NUMBER
Brewers 1512 82.8 64.2% 4.3% 1.3
Diamondbacks 1520 84.6 41.8 3.3 1.0
Yankees 1531 86.9 34.1 3.3 0.9
Rays 1515 83.5 35.7 2.5 0.9
Cubs 1535 87.8 22.0 2.3 0.7
Rockies 1504 80.9 27.7 1.5 0.6
SELLERS ELO RATING EXP. WINS PER 162 GAMES DIV. SERIES ODDS WORLD SERIES ODDS DOYLE NUMBER
Rangers 1525 85.7 14.7% 1.3% 0.4
Royals 1495 78.9 15.3 0.7 0.3
Twins 1475 74.5 18.7 0.5 0.3
Cardinals 1507 81.6 10.2 0.6 0.3
Mariners 1513 83.1 9.3 0.6 0.3
Angels 1502 80.4 5.2 0.3 0.1
Pirates 1496 79.2 5.5 0.3 0.1
Braves 1478 75.3 5.6 0.2 0.1
Blue Jays 1494 78.8 4.3 0.2 0.1
Mets 1503 80.9 3.2 0.2 0.1
Tigers 1489 77.6 3.3 0.1 0.1
Orioles 1474 74.3 3.1 0.1 0.1
Marlins 1494 78.7 2.5 0.1 0.1
Athletics 1478 75.2 1.3 0.0 0.0
White Sox 1467 72.7 1.1 0.0 0.0
Reds 1463 71.8 0.7 0.0 0.0
Padres 1444 67.6 0.2 0.0 0.0
Giants 1475 74.5 0.0 0.0 0.0
Phillies 1433 65.2 0.0 0.0 0.0

Expected wins are derived from the team’s current Elo rating.

Source: FanGraphs
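The “expected wins” column can be approximated from Elo with the standard logistic curve. A minimal sketch, assuming a league-average rating of about 1505 (my estimate from the table; FiveThirtyEight’s exact Elo-to-wins conversion may differ slightly):

```python
def elo_expected_wins(elo: float, league_avg: float = 1505.0,
                      games: int = 162) -> float:
    """Expected wins per `games` against an average opponent, using the
    standard Elo win-probability formula. The league-average rating is
    an assumption, not a published constant."""
    win_prob = 1.0 / (1.0 + 10.0 ** ((league_avg - elo) / 400.0))
    return games * win_prob

print(round(elo_expected_wins(1598), 1))  # Dodgers: about 102, near the table's 101.8
```

A team at exactly the league average comes out to 81 wins (a .500 record), which is a quick sanity check on the formula.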

The Doyle topples one of the most common perceptions of the deadline: The team most in need of a trade is the team that is one bat (or one arm) away from making a postseason run. By contrast, Doyle shows that the teams that should be most willing to buy are the ones having the best seasons — not teams merely on the cusp of the playoffs. It’s a consequence of how random the MLB playoffs are: When even the best teams have long odds of winning, there’s practically no amount of talent a team can add that will cause its World Series probability to hit diminishing returns.

This year, the top Doyle teams are the historically dominant Los Angeles Dodgers and Houston Astros — and, to a lesser extent, the Indians, Washington Nationals and Boston Red Sox. With the possible exception of Houston, each team has at least one position where it can substantially improve, and Doyle indicates they should focus on shoring up those weaknesses in preparation for a World Series run.

More interesting, however, are the clubs near the threshold between buying and selling. These are teams for whom there is less of a clear-cut direction to take — but some decision must be made, since any direction would add more total future championships than merely standing pat. One archetype for that group is the unexpected contender: Think of the Milwaukee Brewers, who find themselves in first place in the National League Central division despite a relatively unimpressive collection of talent. Milwaukee’s 1.26 Doyle suggests it should lean toward buying, since an improved core will become much more valuable in the postseason.

The opposite model might be that of Milwaukee’s division rival, the Chicago Cubs: an expected favorite to whom Doyle gives a disappointingly low World Series probability. The defending champs are having a well-documented down year, and although they’re talented enough to have decent title odds if they make the playoffs, that’s far from guaranteed no matter what deadline moves they make. As a result, their 0.66 Doyle suggests they should lean toward punting on this season.

The Cubs, however, don’t seem willing to give up just yet, trading for starter Jose Quintana last week. They weren’t necessarily wrong to do it, either; it’s important to remember that the Doyle Numbers above mostly apply to rental players. After I tweaked the model to account for the remaining years on Quintana’s contract, Chicago’s Doyle for this specific trade became 1.31 — meaning it was probably worth it to give up top prospects in exchange for improving its talent base over multiple seasons.
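The article doesn’t spell out the multi-year adjustment, but one hedged way to frame its spirit: a multi-year acquisition is worth each season’s talent gain weighted by that season’s exchange rate between current and future wins. The function and numbers below are hypothetical illustrations and will not reproduce the article’s 1.31 figure, which comes from the actual model:

```python
def multiyear_trade_value(wins_added_per_year, doyle_by_year):
    """Future-win cost a team could justify for a multi-year acquisition:
    each season's talent gain times that season's Doyle-style exchange
    rate. An illustrative framing, not FiveThirtyEight's actual tweak.
    """
    return sum(w * d for w, d in zip(wins_added_per_year, doyle_by_year))

# Hypothetical: a starter worth +2 wins a year for three seasons, for a
# team whose Doyle is 0.66 now and assumed to drift toward 1.0 later.
print(round(multiyear_trade_value([2.0, 2.0, 2.0], [0.66, 1.0, 1.0]), 2))  # 5.32
```

The point of the framing is that future seasons, where present and future wins trade at roughly par, can push a low-Doyle team’s calculus back toward buying.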

Those are exactly the kinds of extenuating circumstances a team in Chicago’s current situation needs in order to justify buying instead of selling. Any team with a Doyle north of 0.60 or so could probably do a similar calculation, which means 11 clubs — the Dodgers, Astros, Nationals, Red Sox, Indians, Brewers, Diamondbacks, Yankees, Rays, Cubs and Rockies — could reasonably call themselves buyers this season under the right circumstances.

So we know who’s at the restaurant, and we know who’s on the menu — but what is everyone ordering? We can also use Doyle to build a trade deadline plan for each team, pairing them with players who fit a need and make sense given how realistic a club’s World Series chances are. For each of the 11 teams above, I gathered their current starters23 and tracked how good each is this season, according to Tom Tango’s WARcel projections. I also pulled a list of deadline rental targets24 from the excellent RosterResource.com, calculating their WAR talent as well. Multiplying a team’s Doyle Number by the difference in WAR talent between a rental target and its current starter at the same position, we came up with a “deadline index” that indicates how good of a match the player is for the team. After assigning duplicated targets to the team whose index for the player was highest, here are the best pairings between team needs and available players, according to Doyle:

Doyle’s deadline shopping list

The top targets for each potential buyer based on deadline index, which is the difference in talent between an available ‘rental’ and the team’s current starter at his position, multiplied by the team’s Doyle Number

TOP TARGET CURRENT STARTER FOR TARGET POSITION
TEAM DOYLE PLAYER POS TALENT PLAYER TALENT DEADLINE INDEX
Nationals 1.9 J. Dyson LF +2.7 C. Heisey -0.4 5.8
Astros 2.2 Y. Darvish SP +3.1 M. Fiers +1.1 4.3
Red Sox 1.6 A. Avila C +2.3 C. Vazquez -0.2 4.0
Indians 1.5 J.D. Martinez RF +2.4 T. Naquin +0.3 3.1
Brewers 1.3 Z. Cozart SS +3.1 O. Arcia +1.0 2.7
Dodgers 2.2 A. Reed RP +1.9 P. Baez +0.8 2.3
Yankees 0.9 T. Frazier 1B +2.3 J. Choi +0.1 2.1
D-backs 1.0 C. Granderson LF +2.0 D. Descalso +0.0 2.0
Rays 0.9 C. Maybin LF +1.8 S. Peterson +0.2 1.4
Rockies 0.6 J. Bruce RF +2.0 G. Parra +0.2 1.1
Cubs 0.7 C. Gomez LF +1.6 K. Schwarber +0.0 1.1

Talent is an estimate of a player’s current projected wins above replacement (WAR) per 162 games.

Sources: RosterResource, Baseball-Reference.com, FanGraphs, Tangotiger
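The deadline index itself is simple arithmetic: the Doyle Number times the WAR gap between target and incumbent. A sketch using the Nationals row from the table (the table’s Doyle values are rounded, so the result lands near, not exactly on, the published 5.8):

```python
def deadline_index(doyle: float, target_talent: float,
                   starter_talent: float) -> float:
    """Deadline index: the team's Doyle Number times the talent gap
    (projected WAR per 162 games) between a rental target and the
    incumbent starter at the same position."""
    return doyle * (target_talent - starter_talent)

# Nationals / Jarrod Dyson row: Doyle 1.9, Dyson +2.7, Heisey -0.4.
print(round(deadline_index(1.9, 2.7, -0.4), 2))  # 5.89, vs. the published 5.8
```

Note that a below-replacement incumbent (Heisey’s -0.4) widens the gap, which is why a merely good target can top the list for a high-Doyle team.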

Obviously, there are other layers of complexity involved in actually pulling off these deadline deals, including the quality of the trading team’s farm system, which of its existing players might return from injury before the playoffs, and the possibility of a contract extension with the player being acquired. But the general idea of Doyle is that it provides a flexible framework for trade-deadline decisions, based on how valuable it is to add or shed current talent with an eye on the future.

Keep that in mind as we watch whatever deals unfold over the next couple of weeks. A team’s Doyle Number is a rough guideline, the starting point for thinking about trade possibilities. What happens after that is a combination of reading the market, picking the right moment to strike and then making endless phone calls until that forgettable middle reliever is finally yours.

Check out our latest MLB predictions.
