Archive for the 'Amazon' Category


Amazon’s iPhone mobile app and privacy

The new Amazon mobile app for the iPhone is excellent – well-designed, a great mix of iPhone and Amazon visuals, and easy to use. Despite the lack of Gold Box integration, I’ll use it all the time.

And the idea of Amazon Remembers – a place for you to take photos and have them stored online – that’s neat, though obviously overlapping with dozens of other photo storage services. It also offers product identification – take a picture of something and have people tell you what it is.

Remembers, though, breaks the social contract: it takes pictures that reasonable users might assume will stay private, and makes them public.

As has been documented online, when you take a picture and upload it to Amazon Remembers, it’s sent out to Mechanical Turk users for product identification. That makes sense – product identification is hard and mturk is great for these kinds of problems – except that

  1. People might use Amazon Remembers for non-product images. The marketing materials talk about how you _can_ use it for product images, but not that it’s the only use of the service. Here’s the first page of the Remembers tab in the app:

    Amazon Remembers iPhone image, first screen 

    And here’s the pre-picture screen:

    Amazon Remembers, Screen 2

    No content on either screen assumes that it must be a product, or that other people besides you will see it.

    A user who does click “What happens to my photos?” gets only partial information:

    Amazon Remembers, Screen 3
    This does say that people will look at your photos, but it’s not clear who those people are.

  2. Every picture is then visible to strangers via Mechanical Turk. There’s no obvious pre-screening of images to make sure they’re acceptable. Keep in mind that Mechanical Turk respondents are not Amazon employees; they simply can’t be expected to keep private images private. To test this, I took an image of a young-looking, non-recognizable Dakota Fanning and added a fake name and address to it; I then took a photo of that image and added it to Amazon Remembers. Here’s the image:

    And here’s the Mechanical Turk HIT that appeared less than 15 seconds after the image was uploaded:
    Amazon Mechanical Turk for Amazon Remembers
    So to review: I posted a picture of a minor with a name and an address, and that picture showed up on Mechanical Turk, in front of strangers, immediately. (To their credit, the Turkers who saw this image did recognize it as Dakota Fanning, and sent me to this head shot.)

The violation is that people will assume their memories are private. This isn’t like Amazon’s Customer Images, for example, where customers know, when they use the feature, that their images will show up on the product page.

The principle is pretty simple: if you’re going to take something that people might assume to be private and make it public, you have to be explicit about it. This approach is either poorly thought through or sneaky, but either way, it’s a violation of customers’ trust in Amazon. The problem could be seriously reduced with much clearer messaging, or eliminated with review by Amazon employees (which almost certainly won’t happen).
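The clearer-messaging fix amounts to a consent gate: a photo only goes out to strangers if the user explicitly acknowledged that strangers will see it. Here’s a minimal sketch of that idea; the field names and routing labels are mine, not Amazon’s:

```python
# Illustrative sketch, not Amazon's actual pipeline. The point: a photo
# uploaded without explicit acknowledgement never reaches strangers.
from dataclasses import dataclass

@dataclass
class Submission:
    photo_id: str
    user_acknowledged_public: bool  # user explicitly agreed strangers will see this photo

def route_submission(sub: Submission) -> str:
    """Decide where a submitted photo goes."""
    if sub.user_acknowledged_public:
        return "crowdsourced_identification"  # may be shown to Mechanical Turk workers
    return "private_storage_only"             # stored for the user, never shown to others
```

The design choice is that the default is private; public exposure requires an affirmative, per-photo signal rather than a buried explanation screen.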

I don’t know how large this problem is. I’ve looked at ~30 submitted images, and only one was an obviously personal image; it had no obvious identifying info (though there could be data encoded in the image). But the people picking up the app now are likely early adopters, who are perhaps more likely to be well-informed or to read the fine print.

(Disclosure: I worked at Amazon for four years, working on customer-facing features that dealt with similar issues.)


On Apple, Amazon, Reviewing, and Large Companies

Two interesting stories have been going around in the last week, and I find them similar (even if their impact is different): Apple is being raked over the coals for rejecting an iPhone app that duplicates Apple functionality, and Amazon has been dealing with the customer review attack on Spore, including purging and then restoring all of the reviews.

As I’ve mentioned before, I used to manage the Amazon customer reviews business, and so I know very well what the current team is going through. My assumption is that the Apple app store review business has some similar processes and problems. Here are some things I learned while dealing with this:

You start with some philosophical rules, and you try to make them stick. Providing guidelines is the only way to start. Example philosophies for Amazon (made-up, these aren’t real, don’t quote them anywhere else) could include “our customer is the Amazon buyer” (so no, Ms. Vendor, we won’t take down the negative reviews of your book, even though you spend a lot of money on advertising with us), “we eliminate reviews with demonstrably false information”, and “fairness is more important than justice” (so if you generally write good reviews and then get caught plagiarizing once, you can be given more chances). 

All of these are sensible on their face, and all make sense to folks who think in these kinds of abstractions all day. There may still be debate, but they’re good places to start.

There’s a clear chain of command for decisions. The escalation path from “customer service rep in her fourth week receives a review complaint in the mail queue” to “Jeff decides the review stays” should be very clear. (In my ~2 years dealing with customer reviews, by the way, Jeff engaged on actual content only once, and that issue was much larger than just reviews; he was getting hundreds of mails on the topic. He generally trusted the heads of these teams to do the right thing as long as they could articulate the philosophy.)

All of this sounds good, of course, but then people get involved. Customer service reps are trying to interpret the philosophies (if they can find them among hundreds of pages of other rules), and some decisions are judgment calls that different people will make differently: what counts as “demonstrably false”? If I say “the defibrillator didn’t work and my dad died,” is someone going to check? Are comments on voting records trustworthy? And of course you don’t want Jeff or Steve Jobs or anyone else making every decision.
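The split between clear-cut rules and judgment calls is easy to illustrate: written policy can auto-resolve the unambiguous cases, and everything else has to escalate to a person. A minimal sketch, with made-up field names and routing labels (this is not Amazon’s actual process):

```python
# Illustrative sketch: encode the unambiguous parts of a review policy as
# rules, and escalate ambiguous cases to a human with authority to decide.
def triage_complaint(complaint: dict) -> str:
    """Route a review complaint per a simple written policy."""
    if complaint.get("vendor_dislikes_negative_review"):
        return "keep_review"  # philosophy: our customer is the buyer, not the vendor
    if complaint.get("plagiarism_confirmed"):
        # philosophy: fairness over justice, so a first offense gets another chance
        return "warn_reviewer" if complaint.get("first_offense") else "remove_review"
    # "Demonstrably false" usually can't be checked by a rule, so a person decides.
    return "escalate_to_human"
```

The interesting cases are exactly the ones that fall through to the last line, which is why the escalation path matters more than the rules themselves.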

So it’s messy, and when it’s messy, strange things happen – reviews appear and disappear, apps go away and come back (like Netshare), etc. 

This is a long way of saying that it’s entirely likely that the banning of Podcaster is a problem of human judgment in a theoretically well-structured system (not least because the decision seems inconsistent), and that the app could easily come back, not because of a correction to a philosophy, but because of a correction of a human error.

Now, it’s Apple’s responsibility to make that correction, and then to treat the errant employee with respect and look at how the company can do a better job. 
