To make a long story short, Kristina A. Fox received a supposedly minimally invasive laparoscopy in 1998. Unfortunately, the wand-like electrical tool that cuts tissue and seals blood vessels was emanating an undetected stray electrical charge that created a small hole in her colon. The resulting complications required 13 further operations. Her lawsuit argues that the risk of accidents from laparoscopic surgery could be sharply reduced with the use of fault-detection static and dynamic testing devices that are currently available but used by only a quarter of U.S. hospitals. A gynecologist is quoted as saying "It wouldn't surprise me in the least if [this problem] caused more than 100 deaths and 10,000 injuries annually." [Source: Barnaby J. Feder, Surgical Device Poses a Rare but Serious Peril, *The New York Times*, 17 Mar 2006; PGN-ed; thanks to Lauren Weinstein for noting this one.]
The network that stitches together radars, missile launch sites, and command and control centers for the Missile Defense Agency (MDA) ground-based defense system has such serious security flaws that the agency and its contractor, Boeing, may not be able to prevent misuse of the system, according to a Defense Department Inspector General's report. The report, released late last month, said MDA and Boeing allowed the use of group passwords on the unencrypted portion of MDA's Ground-based Midcourse Defense (GMD) communications network. The report said that neither MDA nor Boeing officials saw the need to install a system to conduct automated log audits on unencrypted communications and monitoring systems. Even though current DOD policies require such automated network monitoring, such a requirement ``was not in the contract''. [...] [Source: Bob Brewin, *Federal Computer Week*, 16 Mar 2006]
http://www.fcw.com/article92640-03-16-06-Web&newsletter%3Dyes

Gabriel Goldberg, Computers and Publishing, Inc., 3401 Silver Maple Place, Falls Church, VA 22042 <http://www.cpcug.org/user/gabe> 1-703-204-0433
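An automated log audit of the kind the IG report found missing need not be elaborate. Purely as an illustration (the CSV log layout, field order, and ten-minute window below are my own assumptions, not anything from the report or from MDA/Boeing systems), a few lines of Python can flag the classic symptom of a group password: one account logging in from different hosts at nearly the same time.

    import csv
    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=10)  # assumed window for "simultaneous" use

    def flag_shared_accounts(log_path):
        # Assumed log layout: ISO timestamp, account, source host per CSV row.
        logins = defaultdict(list)
        with open(log_path, newline="") as f:
            for timestamp, account, host in csv.reader(f):
                logins[account].append((datetime.fromisoformat(timestamp), host))
        for account, events in sorted(logins.items()):
            events.sort()
            for (t1, h1), (t2, h2) in zip(events, events[1:]):
                if h1 != h2 and t2 - t1 <= WINDOW:
                    print(f"FLAG {account}: {h1} and {h2} within {t2 - t1}")

    flag_shared_accounts("auth_log.csv")  # hypothetical export filename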
Buried deep, deep in the small print in the back pages of the Tesco (the leading UK supermarket chain) mobile phone service information booklet is a brief sentence stating that customers who do not wish to be involved in "market research", to wit, having their demographic details tracked and shared, must phone customer services and opt out. Yesterday, my phone number transferred to the Tesco network and I put twenty pounds into my account. This morning, I received an advertising SMS from Tesco. This reminded me that I needed to call customer services regarding "market research" and advertising SMSs. Now, and here's where it gets interesting, they tell me I'm not involved in "market research" because I don't have a club card (a loyalty card), but that for the same reason they *cannot unsubscribe me from advertising SMSs*. Tesco can *add* you to their advertising SMS list without a club card, but they cannot *remove* you without one. What customer services does in this situation is have the shift manager visit a store, pick up a blank club card, register it to the customer, and unsubscribe them.
I work in a three-story office block. Being so high, the building is equipped with a pair of elevators, which appear to co-operate in handling passenger traffic. These are modern elevators, equipped with a female chip-voice announcing which floor the elevator has arrived at and which direction it is about to travel in. The upper floors of the building are lightly populated, so the bathroom facilities on those floors are considerably more pleasant and less crowded. I recently emerged from a pleasant, uncrowded bathroom and pressed the button summoning an elevator. An elevator arrived, its doors opened, and a female chip voice announced the arrival of the elevator at the first floor. The voice was muffled, since it was coming from the *other* elevator, which was also on the first floor with closed doors. I stepped inside and selected "down". The muffled other elevator announced "going down", the doors closed, and the elevator took me to the ground floor. The doors opened and the female chip voice from the floor above then floated down to me... "ground floor". I'm grateful that it was not necessary for me to operate the controls in the other elevator. If it had been, I wonder whether the emergency button would still have worked?
On Friday, March 10, McAfee's antivirus program gave users a nice lesson on the meaning of the term "trusted system". Due to a faulty virus definition file, the software began deleting or "quarantining" hundreds or thousands of legitimate system files (including, among others, Microsoft's excel.exe). http://www.realtechnews.com/posts/2802 http://blog.washingtonpost.com/securityfix/2006/03/mcafee_update_flags_hundreds_o.html
What: International Airport Centers sues former employee, claiming use of a secure file deletion utility violated federal hacking laws.

When: Decided March 8 by the U.S. Court of Appeals for the 7th Circuit.

Outcome: Federal hacking law applies, the court said in a 3-0 opinion written by Judge Richard Posner.

What happened, according to the court: Jacob Citrin was once employed by International Airport Centers and given a laptop to use in his company's real-estate-related business. The work consisted of identifying ``potential acquisition targets''. At some point, Citrin quit IAC and decided to continue in the same business for himself, a choice that IAC claims violated his employment contract. Normally that would have been a routine business dispute. But the twist came when Citrin dutifully returned his work laptop--and IAC tried to undelete files on it to prove he did something wrong. IAC couldn't. It turned out that (again according to IAC) Citrin had used a ``secure delete'' program to make sure that the files were not just deleted, but overwritten and unrecoverable.

In most operating systems, of course, when a file is deleted only the reference to it in the directory structure disappears; the data remains on the hard drive. But a wealth of programs like PGP, open-source programs such as Wipe, and a built-in feature in Apple Computer's OS X called Secure Empty Trash will make sure the information has truly vanished.

Inevitably, perhaps, IAC sued. The relevance for Police Blotter readers is that the company claimed that Citrin's alleged secure deletion violated a federal computer crime law called the Computer Fraud and Abuse Act. That law says whoever ``knowingly causes damage without authorization'' to a networked computer can be held civilly and criminally liable. The 7th Circuit made two remarkable leaps. First, the judges said that deleting files from a laptop counts as ``damage''. Second, they ruled that Citrin's implicit ``authorization'' evaporated when he (again, allegedly) chose to go into business for himself and violate his employment contract. ...

[URL added in archive copy: http://news.com.com/Police+blotter+Ex-employee+faces+suit+over+file+deletion/2100-1030_3-6048449.html ]
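For readers curious what a ``secure delete'' actually does, the core overwrite-before-unlink idea fits in a few lines. The sketch below is illustrative only and assumes a conventional file system; real wipers such as Wipe or srm make multiple patterned passes and also deal with file slack, metadata, journaling, and wear-leveling, which this does not.

    import os
    import secrets

    CHUNK = 1 << 20  # overwrite in 1 MiB chunks to bound memory use

    def secure_delete(path, passes=3):
        """Overwrite a file's bytes with random data, then unlink it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                remaining = size
                while remaining:
                    n = min(CHUNK, remaining)
                    f.write(secrets.token_bytes(n))
                    remaining -= n
                f.flush()
                os.fsync(f.fileno())  # force each pass onto the disk
        os.remove(path)  # only now drop the directory entry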
http://edition.cnn.com/2006/US/03/11/cia.internet.ap/index.html

An interesting article, reiterating what we already know: in the present age of search tools, almost nothing can be hidden from those willing to pay someone to do a search. The article says it succinctly.
My wife just received what from the Subject line looked like a Paypal buyer payment notice to her, as a seller. But she hasn't recently sold anything. Having been taught to be very careful, she looked at the message source before opening it. She then checked Paypal to confirm there had been no payment to her corresponding to this message. So far so good, but now come the gotchas.... When she went to forward the message (using Outlook Express) to spoof AT paypal.com, of course the message was opened and displayed. I figured this was safe, since she would not open any attachments. But, it turns out the content of the message was a small bit of HTML, composed of just an image with a clickable area. In recent versions of IE, such images would be prevented from being downloaded unless approved. Not so in Outlook Express. It retrieved the image, presumably identifying my wife's computer to the originating web site. Oops. But the interesting part is that the image was a good likeness of a Paypal message, with a complete bogus transaction (to pay money to my wife) and a button to click labeled "Dispute Transaction". They are specifically preying on people who want to correct mistakes and give money back. That is, preying on ethical people, not greedy people. We didn't click on it. I have no idea what would have happened next, but probably a request to log into her account to confirm the transaction was incorrect.
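The pattern described here, a message whose visible body is nothing but a single remote image wrapped in a clickable target, is distinctive enough to screen for mechanically. A rough sketch in Python (a heuristic with made-up thresholds, not a production filter):

    from email import policy
    from email.parser import BytesParser
    from html.parser import HTMLParser

    class ImageOnlyDetector(HTMLParser):
        """Count images, clickable targets, and visible text in an HTML body."""
        def __init__(self):
            super().__init__()
            self.images = self.links = self.text_chars = 0
        def handle_starttag(self, tag, attrs):
            if tag == "img":
                self.images += 1
            elif tag in ("a", "area"):  # <area> covers image-map hot spots
                self.links += 1
        def handle_data(self, data):
            self.text_chars += len(data.strip())

    def looks_like_image_phish(raw_message_bytes):
        msg = BytesParser(policy=policy.default).parsebytes(raw_message_bytes)
        html_part = msg.get_body(preferencelist=("html",))
        if html_part is None:
            return False
        detector = ImageOnlyDetector()
        detector.feed(html_part.get_content())
        # One image, at least one clickable target, almost no real text.
        return detector.images == 1 and detector.links >= 1 and detector.text_chars < 40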
When I worked at Bell Labs, before the breakup of the Bell System in 1984, once in a while a memo would come around describing a change to some piece of hardware or other that I knew nothing about. The reason would be that the hardware was so widely used throughout the company that the easiest way to reach everyone who might care about it was to send a memo to every member of management. As an "exempt" employee, I was considered a member of management although I had no one reporting to me. One day I received a memo that announced that as of some date, 0.511-microfarad capacitors were going to be replaced by 0.51-microfarad capacitors, and the old ones would no longer be available. The punchline? In both cases, the capacitors had +/- 10% tolerance. [Unfortunately, the capacity of management often has a tolerance greater than +/- 10%. PGN]
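For the record, the arithmetic bears the joke out: with a 10% tolerance, the two nominal values span nearly identical ranges, so the substitution could not possibly matter.

    # +/-10% bands around each nominal value, in microfarads
    for nominal in (0.511, 0.51):
        print(f"{nominal} uF: {nominal * 0.9:.4f} .. {nominal * 1.1:.4f} uF")
    # 0.511 uF: 0.4599 .. 0.5621 uF
    # 0.51 uF:  0.4590 .. 0.5610 uF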
This problem affects more than consumer devices. When I was working on a project for the Army in the very early 1970's regarding repairing tank engines, a significant fraction (1/4 - 1/3) of tank engines that were sent back from Viet Nam had "nothing wrong" with them. (I don't know how they came up with this statistic — we've all gotten our cars back from the repair shop, only to find that the problem that we took the car into the shop for in the first place had not been fixed.) Of course, a number of the broken engines had been "hacked" — i.e., "hot-rodded" by some good ol' boys, so they failed in a sometimes spectacular manner. But that is a different story...
Frequent users of MS-Excel know to format cells as Text *before* entering data, or to put a single quote in front of any value to have it stay as-is:

  '1DEC        stays 1DEC instead of becoming 01-Dec
  '2310009E13  stays 2310009E13 instead of becoming 2.31E+19
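For spreadsheets generated programmatically, the same defense can be baked in up front. A minimal sketch, assuming the third-party openpyxl library: write each value into a cell whose number format is Excel's Text code, "@", so the content is stored and redisplayed verbatim.

    from openpyxl import Workbook

    wb = Workbook()
    ws = wb.active
    for row, value in enumerate(["1DEC", "2310009E13"], start=1):
        cell = ws.cell(row=row, column=1, value=value)
        cell.number_format = "@"  # "@" is Excel's Text format code
    wb.save("literal_values.xlsx")  # hypothetical output filename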
I'm glad (well, not really) that I'm not the only one who's seen their data swallowed by Excel. I have seen this with data provided by a custom application for a 'large northeastern USA transit operator'. Basically, certain data is represented in hexadecimal format. When the output file (comma-separated values, csv) is brought into Excel, Excel 'helpfully' converts some of these values into scientific notation! And you can't turn it off or unformat it, period. It's converted and that's that. I haven't bothered calling MS; representing data as typed is apparently too advanced a concept for them to understand... At least what we're looking at isn't super critical, and Excel is only one tool that we use. The scary thing, though, is that there's no warning, and you can't turn it off. Who knows what other liberties MS takes with your data...
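One way to inspect such a file without any of this 'help' is to skip Excel entirely. Python's standard csv module, for example, never reinterprets a field: every value arrives as a plain string, so a hexadecimal identifier like 2310009E13 survives untouched. (The filename below is made up.)

    import csv

    with open("transit_export.csv", newline="") as f:
        for row in csv.reader(f):
            print(row)  # each field is a string, exactly as it appears in the file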
I do not see this as a problem in Excel, which is a spreadsheet program designed primarily for accounting calculations - calculations which commonly use numbers and dates. Rather, this is a problem introduced by the designer(s) of the "new bioinformatics programs", who seem to have decided to use Excel as a database program. A crowbar can be used as a hammer, but one runs the risk of making a large hole in a wall instead of simply driving in the nail. Similarly, using Excel as a database instead of Oracle, or SQL Server, or even MS Access (if for some reason use of the MSOffice suite is desired) runs the risk of non-accounting data being interpreted as accounting data. John J. Deltuvia, Jr, Technology Unit, NJAOC Probation Services - CSES
Excel doesn't play well with others. This is not the only kind of data Excel garbles. In the financial world, we use CUSIPs (8- or 9-character codes) or tickers (1- to 5-letter codes) to identify equities. Excel typically garbles these by being over-helpful, as mentioned in the article on micro-array data. So, CUSIPs are sometimes left alone and often treated as numbers, because they are a mix of letters and numerals with a preponderance of numerals. Tickers are less commonly mangled, though there is a company with the ticker "TRUE", which Excel decides is the value TRUE, not the character string. However, potentially even more insidious is the fact that Excel does not properly handle .CSV files. This "Comma-Separated Values" format has been around for decades, but Excel has never handled it properly. Both on input and output, it will often ignore the double-quotes that are intended to distinguish character from numeric fields. Because of this, the obvious solution of putting CUSIPs and tickers in quotes does not work with Excel. Perhaps even worse, there are applications that expect the Excel variant of .CSV files and reject properly formatted ones. To see how ridiculously complicated this can get, look at the section "Excel vs. Leading Zero & Space" in http://www.creativyst.com/Doc/Articles/CSV/CSV01.htm#CSVariations. Thus the risk: the popular error propagates, muddying the waters for years to come.
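Since quoting doesn't help, a workaround often suggested for files destined only for Excel is to wrap each identifier in a formula, ="...", which forces Excel to keep the field as literal text (at the cost of confusing every other CSV consumer). A sketch with illustrative identifiers:

    import csv

    rows = [("037833100", "AAPL"), ("12345E108", "TRUE")]  # example values only
    with open("for_excel_only.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["cusip", "ticker"])
        for cusip, ticker in rows:
            # the ="..." wrapper makes Excel keep the field verbatim
            writer.writerow([f'="{cusip}"', f'="{ticker}"'])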
The database behind the local emergency number, 000, is reported to contain incorrect address information:
http://theage.com.au/news/NATIONAL/Telstra-to-upgrade-Triple0-database/2006/03/14/1142098425868.html
There is no media release on the Telstra website relating to this.
The Symposium will be held May 21-24 at the Claremont Resort in Berkeley, California. See http://www.ieee-security.org/TC/SP2006/oakland06.html

Session: Signature Generation (Christopher Kruegel)

  Towards Automatic Generation of Vulnerability-Based Signatures
    David Brumley, James Newsome, Dawn Song, Hao Wang, and Somesh Jha
    Carnegie Mellon University, USA, and University of Wisconsin, USA

  Misleading Worm Signature Generators Using Deliberate Noise Injection
    Roberto Perdisci, David Dagon, Wenke Lee, Prahlad Fogla, and Monirul Sharif
    University of Cagliari, Italy, and Georgia Institute of Technology, USA

  Hamsa: Fast Signature Generation for Zero-day Polymorphic Worms with
  Provable Attack Resilience
    Zhichun Li, Manan Sanghi, Yan Chen, Ming-Yang Kao and Brian Chavez
    Northwestern University, USA

Session: Detection (Robert Cunningham)

  Dataflow Anomaly Detection
    Sandeep Bhatkar, Abhishek Chaturvedi and R. Sekar
    Stony Brook University, USA

  Towards a Framework for the Evaluation of Intrusion Detection Systems
    Alvaro A. Cardenas, Karl Seamon and John S. Baras
    University of Maryland, USA

  Siren: Detecting Evasive Malware (Short Paper)
    Kevin Borders, Xin Zhao and Atul Prakash
    University of Michigan, USA

Session: Privacy (Carl Landwehr)

  Fundamental Limits on the Anonymity Provided by the MIX Technique
    Dakshi Agrawal, Dogan Kesdogan, Vinh Pham, Dieter Rautenbach
    IBM T J Watson Research Center, USA, RWTH Aachen, Germany, and
    University of Bonn, Germany

  Locating Hidden Servers
    Lasse Øverlier and Paul Syverson
    Norwegian Defence Research Establishment, Norway, Gjøvik University
    College, Norway, and Naval Research Laboratory, USA

  Practical Inference Control for Data Cubes (Extended Abstract)
    Yingjiu Li, Haibing Lu and Robert H. Deng
    Singapore Management University, Singapore

  Deterring Voluntary Trace Disclosure in Re-encryption Mix Networks
    Philippe Golle, Xiaofeng Wang, Markus Jakobsson and Alex Tsow
    Palo Alto Research Center, USA, and Indiana University, Bloomington, USA

  New Constructions and Practical Applications for Private Stream Searching
  (Extended Abstract)
    John Bethencourt, Dawn Song and Brent Waters
    Carnegie Mellon University, USA, and SRI International, USA

5-minute Work-in-Progress Talks

Session: Formal Methods (Susan Landau)

  A Computationally Sound Mechanized Prover for Security Protocols
    Bruno Blanchet
    CNRS, École Normale Supérieure, Paris, France

  A Logic for Constraint-based Security Protocol Analysis
    Ricardo Corin, Ari Saptawijaya and Sandro Etalle
    University of Twente, The Netherlands, and University of Indonesia, Indonesia

  Simulatable Security and Concurrent Composition
    Dennis Hofheinz and Dominique Unruh
    CWI, The Netherlands, and University of Karlsruhe, Germany

Session: Analyzing and Enforcing Policy (Tuomas Aura)

  Privacy and Contextual Integrity: Framework and Applications
    Adam Barth, Anupam Datta, John C. Mitchell and Helen Nissenbaum
    Stanford University, USA, and New York University, USA

  FIREMAN: A Toolkit for FIREwall Modeling and ANalysis
    Lihua Yuan, Jianning Mai, Zhendong Su, Hao Chen, Chen-Nee Chuah and
    Prasant Mohapatra
    University of California, Davis, USA

  Retrofitting Legacy Code for Authorization Policy Enforcement
    Vinod Ganapathy, Trent Jaeger and Somesh Jha
    University of Wisconsin-Madison, USA, and Pennsylvania State University, USA

Session: Analyzing Code (Doug Tygar)

  Deriving an Information Flow Checker and Certifying Compiler for Java
    Gilles Barthe, David A. Naumann and Tamara Rezk
    INRIA Sophia-Antipolis, France, and Stevens Institute of Technology, USA

  Discovering Malicious Disks with Symbolic Execution
    Paul Twohey, Junfeng Yang, Can Sar, Cristian Cadar, and Dawson Engler
    Stanford University, USA

  Pixy: A Static Analysis Tool for Detecting Web Application Vulnerabilities
    Nenad Jovanovic, Christopher Kruegel and Engin Kirda
    Vienna University of Technology, Austria

  Cobra: Fine-grained Malware Analysis using Stealth Localized-Executions
    Amit Vasudevan and Ramesh Yerraballi
    University of Texas Arlington, USA

Session: Authentication (Paul Van Oorschot)

  Integrity (I) codes: Message Integrity Protection and Authentication Over
  Insecure Channels
    Mario Cagalj, Srdjan Capkun, Ramkumar Rengaswamy, Ilias Tsigkogiannis,
    Mani Srivastava and Jean-Pierre Hubaux
    École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, Technical
    University of Denmark, Denmark, and University of California, Los Angeles, USA

  Cognitive Authentication Schemes Safe Against Spyware
    Daphna Weinshall
    Hebrew University of Jerusalem, Israel

  Cache Cookies for Browser Authentication (Extended Abstract)
    Ari Juels, Markus Jakobsson and Tom N. Jagatic
    RSA Laboratories, USA, RavenWhite Inc., USA, and Indiana University, USA

  Secure Device Pairing based on a Visual Channel
    Nitesh Saxena, Jan-Erik Ekberg, Kari Kostiainen and N. Asokan
    University of California, Irvine, USA, and Nokia Research Center, Finland

Session: Attacks (Kevin Fu)

  SubVirt: Implementing malware with virtual machines
    Samuel T. King, Peter M. Chen, Yi-Min Wang, Chad Verbowski, Helen J. Wang,
    Jacob R. Lorch
    University of Michigan, USA, and Microsoft Research, USA

  Practical Attacks on Proximity Identification Systems (Short Paper)
    Gerhard P. Hancke
    University of Cambridge, UK

  On the Secrecy of Timing-Based Active Watermarking Trace-Back Techniques
    Pai Peng, Peng Ning and Douglas S. Reeves
    North Carolina State University, USA

Session: Systems (Helen Wang)

  A Safety-Oriented Platform for Web Applications
    Richard S. Cox, Jacob Gorm Hansen, Steven D. Gribble, and Henry M. Levy
    University of Washington, USA, and University of Copenhagen, Denmark

  Tamper-Evident, History-Independent, Subliminal-Free Data Structures on
  PROM Storage -or- How to Store Ballots on a Voting Machine (Extended Abstract)
    David Molnar, Tadayoshi Kohno, Naveen Sastry and David Wagner
    University of California, Berkeley, USA, and University of California,
    San Diego, USA

  Analysis of the Linux Random Number Generator
    Zvi Gutterman, Benny Pinkas and Tzachy Reinman
    Hebrew University, Israel, Haifa University, Israel, and Safend, Israel

  The Final Nail in WEP's Coffin
    Andrea Bittau, Mark Handley and Joshua Lackey
    University College London, UK, and Microsoft, USA
CRCS Workshop 2006: Data Surveillance and Privacy Protection

* Can you find the terrorist in your database?
* Do hospital admission records hold the secret to catching and confining Avian Flu outbreaks in humans?
* What do banks really know about their customers?
* What's the real purpose behind that RFID tag on your sweater?

On June 3, 2006, Harvard University's Center for Research on Computation and Society will hold a day-long workshop on Data Surveillance and Privacy Protection. Although there has been significant public attention to the civil liberties issues of data surveillance over the past few years, there has been little discussion of the actual techniques that could be employed in any but the most restricted settings. Likewise, there has been little discussion of methods and technologies for conducting data surveillance while respecting privacy and preserving civil liberties. Today's newspapers and TV shows are preoccupied with NSA wiretaps and the accidental release of names and social security numbers. Meanwhile, a far more pervasive surveillance infrastructure is being created around us: the routine use of database information for law enforcement, counter-terrorism, and commercial markets.

The Center for Research on Computation and Society (CRCS) is a new research center with a mission to develop a clear understanding of issues of technology and public policy where the actual technology makes a difference, and to pursue innovative computer science and technology research informed by that understanding. Some of the issues we would like to explore at the workshop include:

* Techniques for mining databases within and between organizations without exposing proprietary or privacy-sensitive information.
* Techniques that are planned for deployment (or are actually being used) to survey hospital admissions data for evidence of epidemics or bioterror attacks.
* Techniques that have been tried, or proposed, for finding terrorists or criminals through the examination of transactional information.
* Techniques that could be used to automatically detect phishing attacks or other kinds of financial fraud.

The workshop will take place on June 3, 2006. Registration will open in early May.

CALL FOR PAPERS AND PRESENTATIONS

The CRCS Workshop Organizing Committee is looking for academics, government officials, business leaders, and individuals who are interested in submitting papers or making presentations at the June 3rd workshop. If you are interested, please send a 2-paragraph abstract of your proposed paper or presentation to crcs-wkshp06@eecs.harvard.edu

For more information, see the wiki at http://www.eecs.harvard.edu/crcs/wiki/index.php/Spring_2006_Workshop_CFP