
Eating peanuts prevents peanut allergies


The results of a major new trial, published in the New England Journal of Medicine, indicate that for children who are at risk of developing a peanut allergy, eating peanuts greatly reduces the chance of an allergy. This is pretty huge news.

All the babies were between 4 and 11 months old when they were enrolled, and all had either an egg allergy, severe eczema, or both, putting them at high risk of a peanut allergy down the road. Indeed, 98 of them were already heading in that direction: They tested positive for mild peanut sensitivity in a skin-prick test. This meant that these babies were already churning out antibodies to the peanut protein. Eating peanuts in the future could set off an allergic reaction.

The team divided the babies into two groups. Half were to avoid eating peanut products until they were 5 years old. The other half received at least 6 grams of peanut protein a week, spread across at least three meals, until they were 5 years old. Bamba was the preferred offering, though picky eaters who rejected it got smooth peanut butter.

Around the 5th birthdays of the trial subjects came the big test. The children consumed a larger peanut portion than they were used to in one sitting, and the results were clear-cut. Among 530 children who had had a negative skin-prick test when they were babies, 14% who avoided peanuts were allergic to them, compared with 2% of those who'd been eating them. In the even higher risk group, the children who were sensitized, 35% of the peanut-avoiders were allergic versus just over 10% of the peanut eaters.

Even if further studies confirm these results, will American parents start feeding their infants peanuts? I don't know...there are lots of similarities to vaccines in play here.

Tags: medicine   science
4 days ago
Homeopathy! I kid.
4 days ago
Imagine, exposure to things helps the body learn to deal with them. It's as though there might be science involved somewhere.
Arlington, VA

Battery Life vs. Phone Thinness


Christopher Mims, writing for the WSJ:

Survey after survey reveals there is one thing consumers wish manufacturers would change about their gadgets. And year after year, gadget makers make only tepid gestures toward giving it to us.

It’s better battery life. […]

It doesn’t have to be this way. There’s a simple enough solution. It requires a company brave enough to persuade users that one of the things we’ve come to expect from phones and other gadgets — that every year, they become thinner and lighter — is a trend that has outlived its usefulness.

It’ll happen soon. Consider laptops — for years, battery life on a laptop was somewhere around 4 or 5 hours, at best. It was a struggle to use one throughout a cross-country flight. Today, you could probably fly coast to coast roundtrip with a MacBook Air on a single charge. But laptops got thinner and lighter before they got better battery life.

5 days ago
I think phones have gotten minimally thin (until they become usefully bendable), and now we'll see battery life take on increased emphasis.
4 days ago
I agree that phones are thin enough, but I personally hope we see smaller width and height before the battery takes precedence. I know I'm probably a niche, but so are monster screens (in my opinion), and with the size of the phone market you can have some very profitable niches.

Re-Introducing Deployinator, now as a gem!


If you aren’t familiar with Deployinator, it’s a tool we wrote to deploy code at Etsy. We deploy code about 40 times per day. This allows us to push smaller changes we are confident about and experiment at a fast rate. Deployinator does a lot of heavy lifting for us. This includes updating source repositories on build machines, minifying/building JavaScript and CSS dependencies, kicking off automated tests, and updating our staging environment before launching live. But Deployinator doesn’t just deploy: it also manages deploys for a myriad of internal tools, such as our Virtual Machine provisioning system, and can even deploy itself. Within Deployinator, we call each of these independent deployments “stacks”. Deployinator includes a number of helper modules that make writing deployment stacks easy. Our current modules provide helpers for versioning, git operations, and for utilizing DSH. Deployinator works so well for us we thought it best to share.

Four years ago we open sourced Deployinator for OSCON. At the time we created a new project on GitHub with the Etsy-related code removed and forked it internally. This diverged and was difficult to maintain for a few reasons. The original public release of Deployinator mixed core and stack-related code, creating a tightly coupled codebase, and configuration and code committed to our internal fork could not be pushed to public GitHub. Naturally, every internal commit that included private data invariably included changes to the Deployinator core as well. Untangling the public and private bits made merging back into the public fork difficult and, over time, impossible. If (for educational reasons) you are interested in the old code, it is still available here.

Today we’d like to announce our re-release of Deployinator as an open source Ruby gem. We built this release with open source in mind from the start by changing our internal Deployinator repository (renamed to DeployinatorStacks for clarity) to include an empty gem created on our public GitHub. Each piece of core Deployinator code was then individually untangled and moved into the gem. Since we now depend on the same public Deployinator core, we should no longer have problems keeping everything in sync.

While migrating the Deployinator core into the gem, it became apparent that we needed a way to hook into common functionality to extend it for our specific implementations. For example, we use Graphite to record the duration of deploys and the steps within them. Some of the steps we track are template compilation, JavaScript and CSS asset building, and rsync times. Since the methods to complete these steps are entirely within the gem, implementing a plugin architecture allows everyone to extend core gem functionality without needing a pull request merged. Our README explains how to create deployment stacks using the gem and includes an example to help you get up and running.

(Example of how deployinator looks with many stacks)

Major Changes

Deployinator now comes bundled with a simple service to tail the logs of running deploys to the front end. This replaces some overly complicated streaming middleware that was known to have problems. Deploys are now separate Unix processes with descriptive proc titles; before, they were hard to discern from other requests running under your web server. The combination of these two things decouples deploys from the web request, allowing uninterrupted flow in the case of network failures or accidental browser closings. Having separate processes also enables operators to monitor and manipulate deploys using traditional command-line Unix tools like ps and kill.

This gem release also introduces some helpful namespacing. This means we’re doing the right thing now.  In the previous open source release all helper and stack methods were mixed into every deploy stack and view. This caused name collisions and made it hard to share code between deployment stacks. Now helpers are only mixed in when needed and stacks are actual classes extending from a base class.

We think this new release makes Deployinator more intuitive to use and contribute to, and we encourage everyone interested to try out the new gem. Please submit feedback as GitHub issues and pull requests. The new code is available on our GitHub. Deployinator is at the core of Etsy’s development and deployment model and central to keeping both fast. Bringing you this release embodies our “generosity of spirit” engineering principle. If this sort of work interests you, our team is hiring.

7 days ago
What we learned at Etsy about forking applications after we open source them.

A Case Study in Empirical Bayes


Empirical Bayes is a statistical technique that is both powerful and easy to use.  My goal is to illustrate that statement via a case study using eBay data.  Quoting the famous statistician Brad Efron,

Empirical Bayes seems like the wave of the future to me, but it seemed that way 25 years ago and the wave still hasn’t washed in, despite the fact that it is an area of enormous potential importance.

Hopefully this post will be one small step in helping Empirical Bayes to wash in! The case study I’ll present comes from ranking the items that result from a search query. One feature that is useful for ranking items is their historical popularity. On eBay, some items are available in multiple quantities. For these, popularity can be measured by the number of times an item is sold divided by the number of times it is displayed, which I will call sales/impressions (S/I). By the way, everything I say applies to any ratio of counts, not just sales and impressions.

The problem

The problem I want to discuss is what to do if the denominator is small. Suppose that items typically have 1 sale per 100 impressions. Now suppose that a particular item gets a sale just after being listed.  This is a typical item that has a long-term S/I of about 0.01, but by chance it got its sale early, say after the 3rd impression.  So S/I is 1/3, which is huge. It looks like an enormously popular item, until you realize that the denominator I is small: it has received only 3 impressions.  One solution is to pass the problem downstream, and give the ranker both S/I and I. Let the ranker figure out how much to discount S/I when I is small.  Passing the buck might make sense in some situations, but I will show that it’s not necessary, and that it’s possible to pass a meaningful value even when I is small.

How to do that?  Informally, I want a default value of S/I, and I want to gradually move from that default to the actual S/I as I increases. Your first reaction is probably to do this by picking a number (say 100), and if I < 100 use the default, otherwise S/I. But once you start to wonder whether 100 is the right number, you might as well go all the way and do things in a principled way using probabilities.

The solution

Jumping to the bottom line: the formula will be (S + α)/(I + γ). This clearly satisfies the desire to be near S/I when S and I are large. It also implies that the default value is α/γ, since that’s what you get when S=I=0. In the rest of this post I will explain two things. First, how to pick α and γ (there is a right way and a wrong way). And second, where the shape of the formula (S + α)/(I + γ) comes from. If you’re familiar with Laplace smoothing then you might think of using (S+1)/(I+1), and our formula is a generalization of that. But it still raises the question — why a formula of this form, rather than, for example, a weighted sum (1 - e^{-\alpha I})(S/I) + e^{-\alpha I}(\alpha/\gamma).
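To make the shrinkage behavior concrete, here is a minimal Python sketch. The values α = 1 and γ = 100 are placeholders chosen so that the default rate matches the 0.01 from the example above; the right way to pick them is derived later in the post.

```python
def smoothed_rate(sales, impressions, alpha=1.0, gamma=100.0):
    """Estimate the sale probability as (S + alpha) / (I + gamma).

    With S = I = 0 this returns the default alpha/gamma; as I grows,
    the estimate approaches the raw ratio S/I.
    """
    return (sales + alpha) / (impressions + gamma)

# A brand-new item with 1 sale in 3 impressions is pulled toward the
# default rate of 0.01 instead of reporting a misleading 1/3.
print(smoothed_rate(0, 0))      # no data: the default 0.01
print(smoothed_rate(1, 3))      # 2/103, about 0.019 -- not 0.333
print(smoothed_rate(50, 5000))  # 51/5100 = 0.01, dominated by the data
```

Note how the early-sale item is barely distinguishable from the default, while the heavily observed item is governed almost entirely by its own history.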

The formula (S + α)/(I +γ) comes by imagining that at each impression, there is a probability of an associated sale, and then returning the best estimate of that probability instead of returning S/I. I’ll start with the simplest way of implementing this idea (although it is too simple to work well).

Suppose the probability of a sale has a fixed universal value p, so that whenever a user is shown an item, there is a probability p that the item is sold. This is a hypothetical model of how users behave, and it’s straightforward to test if it fits the data. Simply pick a set of items, each with an observed sale count and impression count. If the simple model is correct, then an item with n impressions will receive k sales according to the binomial formula:

    \[ \Pr(\mbox{getting } k \mbox{ sales}) = \binom{n}{k} p^{k}(1-p)^{n - k} \]

Here n is the number of impressions and k the number of sales. As mentioned earlier, this whole discussion also works for other meanings of k and n, such as k is clicks and n is impressions. To test the simple model, I can compare two sets of data. The first is the observed pairs (n,k). In other words, I retrieve historical info for each item, and record n impressions and k sales. I construct the second set by following the simple model: I take the actual number of impressions n, and randomly generate the number of sales k according to the formula above. Below is a histogram of the two data sets. Red is simulated (the model), and blue is actual. The match is terrible.
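The model check described above can be sketched in standard-library Python. The (n, k) pairs below are made-up stand-ins for the private eBay data; the point is the procedure, not the numbers.

```python
import random

random.seed(0)

def binomial_draw(n, p):
    """Draw k ~ Binomial(n, p) by summing n Bernoulli trials."""
    return sum(random.random() < p for _ in range(n))

# Hypothetical observed (n_i, k_i) pairs; in the post these come from
# real item histories of impressions and sales.
observed = [(200, 0), (150, 3), (500, 2), (80, 0), (1000, 12)]

# A single universal p, estimated from the pooled data.
p_hat = sum(k for _, k in observed) / sum(n for n, _ in observed)

# Simulate the simple fixed-p model: keep each n_i, redraw each k_i.
simulated = [(n, binomial_draw(n, p_hat)) for n, _ in observed]

# Compare the fraction of zero-sale items under the model vs. observed;
# the post reports 21% simulated vs. 47% actual, i.e. a bad fit.
zero_obs = sum(k == 0 for _, k in observed) / len(observed)
zero_sim = sum(k == 0 for _, k in simulated) / len(simulated)
print(zero_obs, zero_sim)
```

With real data the comparison would be done over full histograms, as in the plot, but the zero-sale fraction alone is enough to expose the mismatch.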

Bayes plot1

Here is some more detail on the plot: Only items with a nonzero sale count are shown. In the simulation, 21% of items have S=0, but in the actual data it is 47%.

So we need to go to a more sophisticated model. Instead of a fixed value of p, imagine drawing p from a probability distribution and plugging it into the inset equation, which is then used to get the random k. As you can see in the plot below, the two histograms have a much more similar shape than the previous plot, and so this model does a better job of matching the actual data.
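The two-stage draw (first p, then k) can be sketched as follows. The values alpha=0.2, beta=20 are illustrative, not the fitted ones; with alpha small relative to beta, many draws of p sit near zero, producing the excess of zero-sale items the fixed-p model missed.

```python
import random

random.seed(1)

def beta_binomial_draw(n, alpha, beta):
    """Draw p ~ Beta(alpha, beta), then k ~ Binomial(n, p)."""
    p = random.betavariate(alpha, beta)
    return sum(random.random() < p for _ in range(n))

# Simulate many items, each with 100 impressions, under the richer model.
draws = [beta_binomial_draw(100, 0.2, 20) for _ in range(1000)]
zero_fraction = sum(k == 0 for k in draws) / len(draws)
print(zero_fraction)  # a substantial fraction of items with k = 0
```

Because p itself varies from item to item, the simulated sale counts are much more dispersed than under a single universal p, which is exactly what the second histogram shows.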

Bayes plot2

Now it all boils down to finding the distribution for p. Since 0 \leq p \leq 1, that means finding a probability distribution on the interval [0, 1]. The most common such distribution is the Beta distribution, which has two parameters, \alpha and \beta. By assuming a Beta distribution, I reduce the problem to finding \alpha and \beta (and yes, this α is the same one as in the formula (S + α)/(I +γ)). This I will do by finding the values of \alpha and \beta that best explain the observed values of k and n. Being more precise, associated to each of N historical items is a sale count k_i and an impression count n_i, with 1 \leq i \leq N.

I was perhaps a bit flippant in suggesting the Beta distribution because it is commonly used. The real reason for selecting Beta is that it makes the computations presented in the Details section below much simpler. In the language of Bayesian statistics, the Beta distribution is conjugate to the binomial.

At this point you can fall into a very tempting trap. Each k_i/n_i is a number between 0 and 1, so all the values form a histogram on [0,1]. The possible values of p follow the density function for the Beta distribution and so also form a histogram on [0,1]. Thus you might think you could simply pick the values of \alpha and \beta that make the two histograms match as closely as possible. This is wrong, wrong, wrong. The values k_i/n_i are from a discrete distribution and often take on the value 0. The values of p come from a continuous distribution (Beta) and are never 0, or more precisely, the probability that p=0 is 0. The distributions of k/n and of p are incompatible.

In my model, I’m given n and I spit out k by drawing p from a Beta distribution. The Beta is invisible (latent) and indirectly defines the model. I’ll give a name to the output of the model: X. Restating, fix an n and make X a random variable that produces value k with the probability controlled indirectly by the Beta distribution. I need to match the observed (empirical) values of (n_i, k_i) to X, not to Beta. This is the empirical Bayes part. I’ll give an algorithm that computes \alpha and \beta later.

But first let me close the loop, and explain how all this relates to (S + α)/(I + γ). Instead of reporting S/I, I will report the probability of a sale. Think of the probability as a random variable — call it P. I will report the mean value of the random variable P. How to compute that? I heard a story about a math department that was at the top of a tall building whose windows faced the concrete wall of an adjacent structure. Someone had spray-painted on the wall “don’t jump, integrate by parts.” If it had been a statistics department, it might have said “don’t jump, use Bayes’ rule.”

Bayes’ rule involves conditional probability. I want not the expected value of P, but the expected value of P conditional on n impressions and k sales. I can compute that from the conditional distribution \Pr(P = p \:|\: (n,k)). To compute this, flip the two sides of the | to get \Pr((n,k) \:|\: P=p). This is \Pr(\mbox{getting } k \mbox{ sales}), which is just the inset equation at the beginning of this post!

Now you probably know that in Bayes’ rule you can’t just flip the two sides; you also have to include the prior. The formula is really \Pr(P = p \:|\: (n,k)) = \mbox{constant} \times \Pr((n,k) \:|\: P = p) \Pr(P=p). And \Pr(P=p) is what we decided to model using the Beta distribution with parameters \alpha and \beta. These are all the ingredients for Empirical Bayes. I need \Pr(P = p \:|\: (n,k)), I evaluate it using Bayes’ rule, the rule requires a prior, and I use empirical data to pick the prior. In empirical Bayes, I select the prior that best explains the empirical data. For us, the empirical data is the observed values of (n_i, k_i). When you do the calculations (below) using the Beta(\alpha, \beta) distribution as the prior, you get that the mean of P is (S + α)/(I + γ) where γ = α + β.

How does this compare with the simplistic method of using S/I when I > δ, and η otherwise? The simplistic formula involves two constants δ and η just as the principled formula involves two constants α and γ. But the principled method comes with an algorithm for computing α and γ given below. The algorithm is a few lines of R code (using the optimx package).

The details

I’ll close by filling in the details. First I’ll explain how to compute \alpha and \beta.

I have empirical data on N items. Associated with the i-th item (1 \leq i \leq N) is a pair (k_i, n_i), where k_i might be the number of sales and n_i the number of impressions, but the same reasoning works for clicks instead of sales. A model for generating the (k_i, n_i) is that for each impression there is a probability p that the impression results in a sale. So given n_i, the probability that k_i = j is \binom{n_i}{j} p^{j}(1-p)^{n_i - j}. Then I add in that the probability p is itself random, drawn from a parametrized prior distribution with density function f_\theta(p). I generate the (k_i, n_i) in a series of independent steps. At step i, I draw p_i from f_\theta(p), and then generate k_i according to the binomial probability distribution on k_i:

    \[ \mbox{Prob}(k_i = j) = \binom{n_i}{j} p_i^{j}(1-p_i)^{n_i - j} \]

Using this model, the probability of seeing (k_i, n_i) given n_i is computed by averaging over the different possible values of p, giving

    \[ q_i(\theta) = \int_0^1 \binom{n_i}{k_i} p^{k_i}(1-p)^{n_i - k_i} f_\theta(p) dp \]

I’d like to find the parameter \theta that best explains the observed (k_i, n_i), and I can do that by maximizing the probability of seeing all those (n_i, k_i). The probability of seeing (n_i, k_i) is q_i(\theta), the probability of seeing the whole set is \prod_i q_i(\theta), and the log of that probability is \sum_i \log q_i(\theta). This is a function of \theta, and I want to find the value of \theta that maximizes it. This log probability is conventionally called the log-likelihood.

Since I’m assuming f_\theta(p) is a beta distribution, with \theta = (\alpha, \beta), then q_i(\theta) becomes

    \begin{eqnarray*} q_i(\alpha, \beta) & = & \binom{n_i}{k_i} \int_0^1 p^{k_i}(1-p)^{n_i - k_i} \frac{ \Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} p^{\alpha-1}(1-p)^{\beta-1} dp \\ & = & \binom{n_i}{k_i} \frac{ \Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} \int_0^1 p^{k_i + \alpha -1}(1-p)^{n_i +\beta - k_i - 1} dp \\ & = & \binom{n_i}{k_i} \frac{B(\alpha + k_i, n_i + \beta - k_i)}{B(\alpha, \beta)} \end{eqnarray*}

The calculation above uses the definition of the beta function B and the formula for the beta integral

    \begin{eqnarray*} B(\alpha,\beta) & = & \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} \\ \int_0^1 x^{\alpha-1} (1-x)^{\beta-1} dx & = & B(\alpha, \beta) \end{eqnarray*}

If you don’t want to check my calculations, q_i(\alpha, \beta) is just the beta-binomial distribution, and you can find its formula in many books and web pages.

Restating: finding \alpha, \beta means maximizing the log-likelihood l(\alpha, \beta) = \sum_i \log q_i(\alpha, \beta), specifically

    \[ l(\alpha, \beta) = \sum_i \left( \log \binom{n_i}{k_i} + \log B(\alpha + k_i, n_i + \beta - k_i) - \log B(\alpha, \beta) \right) \]

And since the first term doesn’t involve \alpha or \beta, you only need to maximize

    \[ \sum_{i=1}^N \log B(\alpha + k_i, n_i + \beta - k_i) - N\log B(\alpha, \beta) \]

The method I used to maximize that expression was the optimx package in R.
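As a rough stand-in for the R/optimx approach, the same objective can be written and maximized in a few lines of standard-library Python. Here a crude grid search replaces a proper optimizer, and the (n_i, k_i) pairs are hypothetical; log B(a, b) is computed via log-gamma.

```python
from math import lgamma

def log_beta(a, b):
    """log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def neg_log_likelihood(alpha, beta, data):
    """Negative of the log-likelihood above; the binomial coefficient
    term is dropped since it doesn't involve alpha or beta."""
    return -sum(log_beta(alpha + k, n + beta - k) - log_beta(alpha, beta)
                for n, k in data)

# Hypothetical (n_i, k_i) pairs standing in for the real item histories.
data = [(200, 1), (150, 3), (500, 2), (80, 0), (1000, 12), (50, 0)]

# Minimize the negative log-likelihood over a coarse grid of
# (alpha, beta) values; optimx or scipy would do this properly.
best = min(((a / 10, b) for a in range(1, 50) for b in range(1, 500, 5)),
           key=lambda ab: neg_log_likelihood(ab[0], ab[1], data))
alpha, beta = best
print(alpha, beta, alpha / (alpha + beta))  # fitted prior and its mean
```

A real implementation would use a continuous optimizer rather than a grid, but the objective function itself is exactly the expression above.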

The final missing piece is why, when I replace S/I with the probability that an impression leads to a sale, the formula is (k + \alpha)/(n + \gamma).

I have an item with an unknown probability of sale p. All that I do know is that it got k sales out of n impressions. If P is the random variable representing the sale probability of an item, and F = (k,n) is a random variable representing the sale/impression of an item, I want \Pr(P = p \:|\: F = (k, n)), which I write as \Pr(p \:|\: k, n) for short. Evaluate this using Bayes’ rule,

    \[ \Pr(p \:|\: k, n) = \Pr(k,n \:|\: p) \Pr(p) / \Pr(k,n) \]

The \Pr(k,n) term can be ignored. This is not deep, but can be confusing. In fact, any factor \phi(k,n) involving only k and n (like 1/\Pr(k,n)) can be ignored. That’s because \int \Pr(p \:|\: k, n) dp = 1, so if \Pr(p \:|\: k, n) = f(p,k,n)\phi(k,n) it follows that \phi can be recovered from f(p,k,n) using \phi(k,n) = 1/\int f(p,k,n)\, dp. In other words, I can simply ignore a \phi and reconstruct it at the very end by making sure that \int \Pr(p \:|\: k, n) dp = 1.

I know that

    \[ \Pr(k,n \:|\: p) = \binom{n}{k} p^k(1-p)^{n-k} \]

For us, the prior \Pr(p) = f_\theta(p) = f_{\alpha, \beta}(p) is a beta distribution, \mbox{Beta}_{\alpha, \beta}(p) = p^{\alpha-1}(1-p)^{\beta-1}/B(\alpha, \beta). Some algebra then gives

    \[ \Pr(p \:|\: k, n) \propto \Pr(k,n \:|\: p) \Pr(p) \propto \mbox{Beta}_{\alpha + k, \beta + n - k}(p) \]

The \propto symbol ignores constants involving only k and n. Since the rightmost term integrates to 1, the proportionality is an equality:

    \[ \Pr(p \:|\: k, n) = \mbox{Beta}_{\alpha + k, \beta + n - k}(p) \]

For an item with (k,n) I want to know the value of p, but this formula gives the probability density for p. To get a single value I take the mean, using the fact that the mean of \mbox{Beta}_{\alpha, \beta} is \alpha/(\alpha+\beta). So the estimate for p is

    \[ \mbox{Mean}(\mbox{Beta}_{\alpha + k, \beta + n - k}) = \frac{\alpha + k}{\alpha + \beta + n} \]

This is just (S + α)/(I + γ) with γ = α + β.
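The closed-form mean can be sanity-checked numerically: integrate p against the Beta posterior density and compare with (α + k)/(α + β + n). The prior values below are illustrative (a prior mean of 0.01); in practice α and β come from the likelihood maximization above.

```python
from math import exp, lgamma, log

def beta_posterior_mean(k, n, alpha, beta):
    """Closed form: mean of Beta(alpha + k, beta + n - k)."""
    return (alpha + k) / (alpha + beta + n)

def numeric_posterior_mean(k, n, alpha, beta, steps=20000):
    """Check the closed form by numerically integrating
    p * Beta_{alpha+k, beta+n-k}(p) over (0, 1)."""
    a, b = alpha + k, beta + n - k
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)  # -log B(a, b)
    total = 0.0
    for i in range(1, steps):
        p = i / steps
        total += p * exp(log_norm + (a - 1) * log(p) + (b - 1) * log(1 - p))
    return total / steps

# Illustrative prior; gamma = alpha + beta plays the role of the
# denominator constant in (S + alpha) / (I + gamma).
alpha, beta = 1.0, 99.0
print(beta_posterior_mean(1, 3, alpha, beta))     # (1 + 1) / (100 + 3)
print(numeric_posterior_mean(1, 3, alpha, beta))  # same value, numerically
```

The two printed values agree to several decimal places, confirming that reporting (S + α)/(I + γ) really is reporting the posterior mean of the sale probability.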

There’s room for significant improvement. For each item on eBay, you have extra information like the price. The price has a big effect on S/I, and so you might account for that by dividing items into a small number of groups (perhaps low-price, medium-price and high-price), and computing \alpha, \beta for each. There’s a better way, which I will discuss in a future post.


Putting a price on the priceless


In their latest full episode, Radiolab examines the concept of worth, particularly when dealing with things that are more or less priceless (like human life and nature).

This episode, we make three earnest, possibly foolhardy, attempts to put a price on the priceless. We figure out the dollar value for an accidental death, another day of life, and the work of bats and bees as we try to keep our careful calculations from falling apart in the face of the realities of life, and love, and loss.

I have always really liked Radiolab, but it seems like the show has shifted into a different gear with this episode. The subject seemed a bit meatier than their usual stuff, the reporting was close to the story, and the presentation was more straightforward, with fewer of the audio experiments that some found grating. I spent some time driving last weekend and I listened to this episode of Radiolab, an episode of 99% Invisible, and an episode of This American Life, and it occurred to me that as 99% Invisible has been pushing quite effectively into Radiolab's territory, Radiolab is having to up their game in response, more toward the This American Life end of the spectrum. Well, whatever it is, it's great seeing these three radio shows (and dozens of others) push each other to excellence.

Tags: audio   death   economics   podcasts   Radiolab
30 days ago
I thought the episode from last year on Translation from RadioLab was amazingly good.
31 days ago
flagging to revisit. i havent listened to radiolab in months

★ Siri Improvements


I’ve noticed over the past year that Siri is getting faster — both at parsing spoken input and returning results. I use iOS’s voice-to-text dictation feature on a near-daily basis, and it’s especially noticeable there. I’ve been using a Moto X running Android 5.0 the past few weeks, so today I did a side-by-side comparison between Siri and Android’s Google Now, asking both the simple question, “What temperature is it outside?” Both phones were on the same Wi-Fi network. Siri was consistently as fast or faster. I made a video that shows them in pretty much a dead heat.

My point here isn’t “Siri is better than Google Now”, or even “Siri is as good as Google Now”. Once you get past the superficial level, they’re different enough that it’s hard to make a blanket one-is-better-than-the-other comparison. I’d even agree that Google Now is better at many complex queries, and, further, that “What’s the temperature?” is a very simple question.

But: it’s a question I ask Siri almost every day, before I get dressed, especially during winter. I want to know whether it’s going to be just plain cold, or really fucking cold. When Siri debuted in 2011, it was often (usually?) relatively slow to parse your spoken input, and slow to return results. Your mileage may vary, but for me that just isn’t true any longer. Siri has also gotten much, much better while on cellular networks. Part of that is surely that LTE networks are maturing, but I suspect part of it is Apple’s doing as well.

Nor is my point about which service presents the information in a more attractive or useful layout. My point here is simply this: Siri is noticeably faster than it used to be. Even just a year ago, I don’t think Siri could have held its own with Google Now pulling information like the current temperature or sports scores, but today, it does. Apple has clearly gotten much better at something everyone agreed was a serious weakness. Two years later, I don’t think “Google is getting better at design faster than Apple is getting better at web services” feels true any more.


  • After I posted that video to Twitter, DF reader Steven Op de beeck made an overlay showing his results in Belgium. Outstanding Siri performance.

  • Here’s a Storified collection of just about every response to my “Just me, or is Siri getting a lot faster?” tweet.

  • My 2010 piece for Macworld, “This Is How Apple Rolls”, on the company’s pattern of steady, iterative year-over-year improvements to its products, seems apt.

  • I think this is a case that shows how important first impressions are. Quite a few of the responses I got on Twitter were along the lines of, “I don’t know, I gave up on Siri years ago.” No product or feature is ever perfect when it debuts. Quite the opposite: brand-new products/features usually debut needing numerous obvious improvements. But, ideally, they should debut on the right side of the “good enough to engender affection” line. The original iPhone had no third-party apps, only EDGE networking, and no copy-and-paste. But we loved it. Siri, I think it’s clear in hindsight, debuted on the wrong side of that line. It’s harder to change a negative perception than it is to create a positive one from a blank slate.

  • Lastly, a rather obvious but important observation: Improvements to Siri across the board — reducing latency, improving accuracy, increasing utility — are essential to the success of Apple Watch. And — given the previous note on first impressions — it’s pretty important that Siri integration on Apple Watch work well right from the start. Apple will find itself in a deep hole if voice dictation via Apple Watch gets saddled with an “Egg Freckles”/”Eat up Martha” reputation.

46 days ago
I asked Siri "What is horchata?" the other day and got the right answer. I don't use it very often and was surprised.
46 days ago
Today I needed to return a rental car, so I asked Siri to "take me to the nearest gas station." She gave me turn-by-turn directions to a Marriott hotel, no gas anywhere in sight. I asked Google, and it found me an actual gas station 4 blocks away.

When it comes to map data, you don't get many do-overs. Why would I bother with Siri again after that?
Sioux City, Iowa
39 days ago
pedantic or not, that's a fault of the map data; not Siri. Apple's map data is terrible, that fact is undisputed. But "i don't use Siri for directions again" is a different sentence than "I won't use Siri again"
39 days ago
I assure you end users are not worried about the difference between Siri and Apple's map data. Bad answers are bad answers, and if Apple isn't confident in Siri's ability to give good answers involving maps, Siri should politely offer to Google that for you.