Blog Post 14-Online Censorship

As one of the hottest topics in technology today, online censorship raises many issues. Morally and ethically, we certainly have problems with withholding information from citizens just because a government is against it. We have even more problems with the draconian way that censorship is enforced. You can expect to receive anything from threats, to years in jail, to actual physical harm for simply visiting a website or sending an email. However, the even more drastic issues fall into the social realm. Taking information away from citizens will always have a negative effect on society as a whole. People will be less informed, technological innovation will suffer, and there will be less idea sharing across the country. The question of why governments would want to censor the internet, or really censor anything, is pretty simple. There's information out there that the government wishes you wouldn't notice. Maybe the government has done something wrong, maybe other governments are doing better than it is, or maybe there are websites speaking against the government. This kind of information can spark riots and end a government's existence, so it's obvious why a government would attempt to remove it if possible.

In today's day and age, it's going to take some seriously advanced techniques to cut an entire population off from information. The period we're living in is called the information age for a reason. Information is everywhere, easily accessible by a normal person, and very hard to cut off. Looking at China, probably the most successful internet censor, gives us a good idea of how this is possible. The first technique is using advanced technology to block users from certain sites. First of all, internet traffic is channeled through three checkpoints, which makes it much easier to monitor all traffic in and out of the country. Packets can be monitored to see if they contain any subversive material, or if they are coming from or going to a banned website. There is also the idea of self-censorship: China will demand that companies remove all information it deems undesirable, or the company and its website face being banned from Chinese users. China also has strict punishments for attempting to get around censorship, from lengthy jail time to physical threats.
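To make the checkpoint idea concrete, here is a toy sketch of what filtering traffic at a chokepoint conceptually looks like. The blocklist, keywords, and hostnames below are all hypothetical; real national firewalls use far more sophisticated techniques (DNS poisoning, IP blocking, deep packet inspection) at enormous scale.

```python
# Toy model of a censoring checkpoint: traffic is allowed only if the
# destination host is not banned and the payload contains no flagged phrases.
BLOCKED_HOSTS = {"banned-news.example", "forbidden-forum.example"}
BLOCKED_KEYWORDS = ["subversive phrase"]

def checkpoint_allows(host: str, payload: str) -> bool:
    """Return True if this traffic would pass the toy checkpoint."""
    if host in BLOCKED_HOSTS:
        return False
    return not any(kw in payload.lower() for kw in BLOCKED_KEYWORDS)

print(checkpoint_allows("news.example", "ordinary article text"))   # True
print(checkpoint_allows("banned-news.example", "anything at all"))  # False
```

Channeling all traffic through a few checkpoints is what makes even this naive approach feasible: the filter only has to sit in a handful of places to see everything.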

The ethical and moral debate gets a little murkier when the focus switches to the companies providing the information. Are they ethically and morally responsible for fighting censorship and not giving in to government demands? They face many issues if they do: business will be lost in the most populated country in the world, they will lose favor with one of the largest governments in the world, and depending on the violation they could face jail time. I think it is ethical for these companies to refuse censorship requests and to develop ways to get around censorship, because large technology companies are really the only group we have that can fight internet censorship. Individuals fighting the Chinese government will not really have an effect on it. However, Google threatening to pull its services from the country certainly would. Thus I would say these companies are not only ethically and morally justified in fighting censorship, they are required to. They are our only hope.

Blog Post 13-Encryption is Important to Everyone

Encryption is an extremely important issue to me, and I honestly think it should be an important issue to everyone. If somebody looks at the recent battle between the FBI and Apple and says "it doesn't matter to me," they don't understand how important encryption is in their daily life. Imagine a world where internet traffic was completely unencrypted. It would essentially be impossible to perform any sort of business on the Internet that required security. Digital financial transactions would become unfeasible, as anybody listening in would have the private information of both parties. Business transactions would become impossible as well, as anybody listening in could obtain company secrets. Even visiting simple websites would become troubling, as without authentication and digital signatures we would have no idea whether the website we are visiting is a spoof. The internet would become useless without the security protocols behind it, protocols many don't even realize exist.
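As a minimal sketch of one of those invisible protocols, here is message authentication using Python's standard `hmac` module. Real web security (TLS) uses certificates and public-key signatures rather than a pre-shared key, but the underlying idea is the same: a cryptographic tag lets the receiver detect whether a message was tampered with in transit.

```python
import hashlib
import hmac

# In TLS this role is played by keys negotiated during the handshake;
# here we just assume a shared secret for illustration.
secret_key = b"shared-secret"

def sign(message: bytes) -> str:
    """Produce a tag so that any tampering with the message is detectable."""
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check the tag in constant time to avoid timing side channels."""
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer $100 to Alice"
tag = sign(msg)
print(verify(msg, tag))                          # True: message is intact
print(verify(b"transfer $100 to Mallory", tag))  # False: tampering detected
```

Without this kind of mechanism, the eavesdropper described above could not only read a financial transaction but silently rewrite it.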

So anybody who says encryption isn't important to them either doesn't understand it or has absolutely no interaction with the digital world on a daily basis. I tend to believe the former is true. Those who say "they have nothing to hide" really mean that they have nothing to hide from the government. While I would argue that this probably isn't true, my main point is that the government is not the only entity you are hiding information from. Even the perfect, upstanding citizen with nothing to hide from the government wouldn't want their information in the hands of criminals. Encryption keeps our information from the government (sometimes) and from criminals. It is not one or the other.

I would have to admit that while I am very much pro-encryption, I probably have not let my stance affect my political, financial, and social actions as much as it should. Before this battle between the FBI and Apple, I did not even realize that encryption was a hot topic. I just assumed everyone understood that it is essential to most people's daily lives. This was probably a little naive of me, as government agencies are always going to have a problem with there being information out there that they cannot access. Now that I am starting to realize the issue, I think I will let my beliefs affect my actions more.

Politically I think this is very hard to do. It is really hard to get any concrete information on candidates' stances on encryption in the upcoming election. Almost every candidate has either not touched the issue or gone back and forth on it. Simply put, this is not an issue that is important to the average American voter, so it makes sense that candidates would not spend much time on it. It's tough to figure out what a given candidate will do once in office. It is much easier to change financial and social actions. Financially, you can stop supporting companies that give in to government demands for backdoors. Although I don't purchase Apple products on a regular basis, I will certainly support them less if they give in to the FBI (or more if they don't). Socially, the best thing to do when this argument comes up is to make sure all parties are informed. Make sure they actually know what encryption is, how it is essential to everyday life, and how they really do have things to hide.

 

Blog Post 12-DRMs and the DMCA

The Digital Millennium Copyright Act is clearly against circumvention and reverse engineering. Section 1201 of the act, the "anti-circumvention" provision, bans the "production and dissemination of technology, devices, or services intended to circumvent measures (DRMs) that control access to copyrighted works." It also bans "the act of circumventing an access control (DRM)." Basically, it's illegal to make tools that remove DRMs (digital rights management schemes) or to use these tools. This ranges from ripping a CD to a digital file, to removing the DRM from iTunes songs, to creating "black box" mobile devices that work with any carrier. So not only is it technically illegal to rip a CD, it could be considered illegal for somebody else to then listen to the audio file. The DMCA puts heavy restrictions on circumvention, reverse engineering, and users of circumvented or reverse-engineered products.

The question then becomes, is it really ethical for companies to use DRM schemes? In essence it seems like you do not actually own the products that use DRMs. Do you really own a phone if you can only use it with one carrier? John Deere has essentially shown this is what the companies think as well. According to comments submitted to the US Copyright Office, farmers don't actually own John Deere tractors; they simply receive "an implied license for the life of the vehicle to operate the vehicle." GM has made similar arguments, saying owners of automobiles do not own the software behind them and thus cannot remove DRMs. This sets a very dangerous precedent, destroying the idea of actually owning something. When buying a tractor or car, the dealer is not going to tell you that you're paying thousands of dollars to simply license the vehicle. This line of thinking can be abused so that you never actually own anything that has software running behind it, and surprising types of products have software running behind them today.

Back to the original question: whether or not companies are acting ethically by using DRMs. I would say, like many of the topics we have covered, it depends on how the DRM is being used. If the only purpose of the DRM is to prevent the illegal distribution of copyrighted materials, I am completely fine with it. It is clear that this was the original intent of DRMs in theory, but not how they are actually used in practice. The issue is how vague and inconsistent the language and interpretations of the DMCA are. The original intent of the law was not to completely deny ownership to users, but that is essentially how the law is used today. So I would say that, in theory, companies could act ethically using DRMs. In practice, they almost never do.

Now to the other side of the question: whether or not users are acting ethically by building DRM removal tools or using these tools. Again I think it depends on the builder's or user's intentions. If they are simply trying to improve a product they already own (i.e. unlocking their phone, fixing bugs, etc.), then they are acting completely ethically. However, if they are doing this to illegally distribute copyrighted software, I cannot say they are acting ethically. I do not buy the argument that this is ethical because copyright owners are wealthy, or because "everybody is doing it." I think it is unethical no matter what. However, simply fixing a product you own is certainly ethical.

Blog Post 11-Copyright and Open Source Software

In Article 1, Section 8, Clause 8 of the United States Constitution, a copyright is defined as "the exclusive Right to their (Authors and Inventors) respective Writings and Discoveries…for limited Times." In terms of copyright we are concerned with rights to authors for writings (while patents are rights to inventors for discoveries). The idea of copyright vs. patent in terms of software is a little confusing: do we consider programmers "Authors" who make "Writings," or "Inventors" who make "Discoveries"? In fact, it is a little bit of both. U.S. copyright law considers computer programs "literary works," which can thus be granted copyrights. However, this provides limited protection. Copyright protects an expression of an idea, but not the idea itself. Essentially you would be preventing literal copying of source code, but not necessarily a slight rewrite of the code that achieves the same end. If you want to actually protect the "idea" behind a program, so somebody couldn't get away with just rewriting your code, a patent would be a better option. The line between these two is especially murky in software, and the law is being applied differently every day. Ethically, copyrights are given to protect original, creative works from being copied and distributed, which would leave the original author without just compensation. Economically, they make sure that content creators receive the monetary compensation they deserve. In terms of society as a whole, copyrights make sure that content creators continue to create; there would be no reason to create original work if it could just be copied.

I wouldn't call open source software "inherently better" than proprietary software. Open source works for some projects, while it doesn't work for others. There are pros and cons to each. Some projects may lead to licensing issues if open sourced, you may have nobody on your team with open source experience, or you may not want to push software to undesired users. There are also positives for some projects, like lower cost, more flexibility, and not getting locked into a single vendor. However, the main issue is the fact that anybody can look at and potentially change your source code. This can be a positive or a negative. The positive is that you have more eyes on your source code, so there's a much better chance of potential bugs being noticed and fixed. The other side of the coin is that these eyes might decide to exploit a bug instead of fix it. Many people claim that issues like Heartbleed and Shellshock could have been exploited by those who noticed them in the source code. Instead of fixing these issues, they were used for the finder's own ends. Thus open source may not be better. It essentially comes down to whether or not we trust our programming community. Do we trust them to fix bugs, or to exploit them? I believe that, while there are these exceptions, the community as a whole will fix open source software rather than exploit it.

While the ideas behind open source and free software are very similar, the key difference is that free software puts more emphasis on always being able to modify and redistribute the code. A key idea of free software is maintaining copyleft, meaning that you will only distribute your software freely if anything derived from it can be distributed freely. This idea is not as important in open source. I would consider the GPL more free than BSD, as it requires copyleft. I prefer the GPL, as I think it is important to make sure that further works derived from your free software remain free. I also believe that governments and public organizations should be pushed to adopt open source more than private companies. Since these projects are funded by the community, there is a responsibility to leave them open to the community. In terms of using open source software, there is a responsibility you take on by using it. The price of using the software is that you should attempt to fix it if you are able. Maybe this doesn't mean devoting your whole life to the project, but if you notice a bug, see if you can fix it or at least notify somebody who can.

 

Blog Post 10-Online Advertising and Big Data

The dreaded Terms and Conditions. We all know nobody reads them, even getting angry when they have to scroll to the bottom of the page to pretend they've read them. However, many don't realize that checking that "Agree" checkbox signs away a valuable part of themselves: their information. Many of these Terms and Conditions give companies the right to do basically whatever they want with your information, whether used by themselves or sold to other companies. Suddenly products similar to what you just bought on Amazon show up in advertisements, or you're getting emails from stores you just visited even though you never gave them any of your information. We live in a world where companies can know essentially everything about you, from your demographics to the products you like to where you've been. Most don't realize this occurs, or even if they do realize it, they have no problem with it. However, is this practice ethical?

The argument for this practice being ethical is fairly straightforward. The use of your data is an implied cost of the service you are being provided. If companies couldn't profit off your data, their services would be more expensive. In the case of "free" services, without profiting off user data these services would no longer be free. Most people are okay with receiving personalized advertisements if it means Facebook remains free and Amazon continues to provide cheap shipping. I fall into this category. I find that if companies are using your data to "subsidize" costs to the user, and their practices fall in line with their Terms and Conditions (and are legal), the company is acting ethically. While I am okay with the theory of online advertising, we know that in practice companies do not always act this ethically. Often companies will act against their own Terms and Conditions, thinking users will never find out. Maybe they'll hide shady or outright illegal terms. Companies also certainly have the right not to pass savings on to users, but when they don't, it seems they are acting less ethically.

I actually got a first-hand introduction to the methods used in data collection and analysis in my data mining class. Given user demographic data, I tried to build a model that predicts something about a user: whether or not they use a screen lock on their phone. Basically I wanted to figure out what type of people tend to forget to use screen locks (by age, gender, etc.) so a company would know to send reminders to those people. This starts to fall into the uncanny valley of categorization mentioned in The Atlantic article; it would be a little unsettling to realize you're the only person getting emails from company security because you're the only person forgetting to use a screen lock. However, this information would certainly be beneficial to companies.
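The simplest version of that kind of model can be sketched in a few lines. The data below is entirely made up, and a real data mining class would use a proper classifier over many features, but even a majority-vote model per demographic group captures the basic idea of predicting behavior from demographics:

```python
from collections import Counter, defaultdict

# Hypothetical training rows: (age_group, uses_screen_lock)
data = [
    ("18-25", True),  ("18-25", True),  ("18-25", False),
    ("26-40", True),  ("26-40", False), ("26-40", False),
    ("65+",   False), ("65+",   False), ("65+",   True),
]

def train(rows):
    """For each demographic group, predict the most common label
    observed in the training data (a majority-vote baseline model)."""
    counts = defaultdict(Counter)
    for group, label in rows:
        counts[group][label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(data)
print(model["18-25"])  # True  -> this group mostly uses screen locks
print(model["65+"])    # False -> a group a company might target with reminders
```

Even this crude model shows why the practice is so attractive to companies: a little demographic data goes a long way toward predicting individual behavior.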

However, there is the dark side of data storage that Kate Kochetkova brings up in her article. We inherently trust every single company whose services we use to handle our data securely. Just by visiting a website we are "using their services" and often give them permission to store our data. There are probably a lot of companies' websites we visit that we wouldn't necessarily trust to handle our data well. Legally, they may only be required to handle the data in accordance with their Terms and Conditions. Ethically, they are held to a higher standard. I believe a company should figure out what the companies it sells data to actually do with that data; it shouldn't blindly sell to any buyer. There is also an implicit responsibility to keep our data safe, though this is often a legal responsibility as well.

I do use an advertisement blocker, though now that I think about it, I am not sure it is always ethical. Many companies depend on advertisements to survive, so it is not really ethical to use their services but deny them any "payment." However, many of these blockers allow you to unblock advertisements on websites you believe deserve "payment." I may start unblocking sites I use for free so they get the compensation they deserve.

Blog Post 9-Government Backdoors

The use of encryption is at the forefront of today's national defense debate. After recent terrorist events, which may have been organized through encrypted channels, many are shocked and outraged that there are ways terrorists can hide their information such that no one, not even the government, can access it. Before these events, many people had probably never even heard of encryption. They probably don't understand that it's something they use on a day-to-day basis, whether through the internet, mobile phones, etc. Now to many it has an evil connotation. And we are left asking what the companies providing these services should do about it. Is a company more ethically responsible for its users' privacy or for society's safety?

The main argument for companies providing a backdoor to governments is that encryption does not allow governments to execute warrants for information. This argument makes sense to me. I would be completely against allowing the government to access our data at any time. However, allowing the government access to our data when a court warrant is served seems justified. The Manhattan district attorney defended this point in a recent paper, stating, "Last fall, a decision by a single company changed the way those of us in law enforcement work to keep the public safe and bring justice to victims and their families." If we can issue a court warrant and legally obtain physical evidence in a case, we should certainly be able to do the same with digital evidence. However, it appears that this is impossible.

In previous encryption systems, Apple would retain a key that could be used to unlock customers' data in the case of a search warrant. This seems perfect: a user's data would be secure except in the case of a warrant. However, we do not live in a perfect world. What Apple found is that creating this backdoor, even if designed only to be used by them, left their system vulnerable. Eventually somebody would find a way in. This is the root of our issue. In a perfect world we could create a system that would be completely secure, except in the case of a search warrant. However, that is not possible. Once we create a vulnerability in the system, eventually somebody will exploit it. So in a perfect world, Apple could balance its ethical responsibility to privacy with its responsibility to security. Sadly, the answer to our question is not that easy, so we have to choose a side.
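The structural problem with key escrow can be shown in a few lines. This is a toy sketch only: the XOR "cipher" below is deliberately trivial and not secure, the names are hypothetical, and Apple's actual scheme was far more elaborate. The point is that any path the company keeps for itself is also a path an attacker can steal.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher -- XOR is NOT secure encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

user_key = secrets.token_bytes(16)
escrow = {"user42": user_key}   # the company's retained copy: the "backdoor"

ciphertext = xor_cipher(b"private message", user_key)

# Normal path: the user decrypts with their own key.
assert xor_cipher(ciphertext, user_key) == b"private message"

# Warrant path: the company can decrypt via the escrowed key...
assert xor_cipher(ciphertext, escrow["user42"]) == b"private message"

# ...which means anyone who compromises the escrow store can decrypt
# every user's data. The warrant path and the attack path are the same door.
```

This is why removing the escrowed key, as Apple did, protects users but frustrates warrants: there is no way to keep the door open for one party only.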

I have to side with Apple in this argument: they are more responsible for their users' privacy than for what their users do with their system. When the argument is brought up that "isn't saving lives or protecting our nation worth a little less individual privacy," I say that we are not losing a little privacy; we are losing all of it. Even if we trust that the government will only use the backdoors legally, we take the chance that a malicious party will exploit them. When the argument is brought up that "if you've got nothing to hide, you've got nothing to fear," I say that everybody has something to hide. Even a completely ethical person may have credit card information, company information, etc. on their mobile phone. Many people right now would say national security is worth their privacy, but if you told them their identity might be stolen, I think they would change their minds. National security is very important, but maintaining digital privacy is basically necessary for most people to live normal lives. Thus I think protecting our privacy is more valuable than protecting against the chance that encryption is used against national security.

 

Blog Post 8-Our Job Interview Guide

To me, the most important section of our guide is the "Things I Wish I Knew as a Freshman" section. This highlights the main points from our guide that we each felt would be the most beneficial to a Notre Dame Computer Science or Engineering freshman reading our guide. This is probably cheating a little bit for this question, so I'll look at some of the other sections of our guide I feel would be very helpful to an incoming student. One key section is the "When to Start Preparing" section. This has some advice I could certainly have used, as I probably started preparing for my career a little too late. The "How to Prepare" section would also be very useful. I took a "learn as you go" approach to preparing for my interviews. I didn't really know how to prepare for my first couple of interviews, but learned through the experience of those early ones. This meant that I knew what I was doing by the time of my later interviews, but I probably could have performed better in the early interviews had I known how to prepare. This section had some great tips, such as researching the company and position beforehand so you can answer certain questions better. It also had some great resources for preparing for technical interviews. Probably the best advice I got from it is to utilize the mock interviews that the Career Center runs. I didn't know these existed (probably because I ignore too many emails). These would have been great for working out the kinks of interviewing and learning how best to prepare.

I think the University’s Computer Science and Engineering department prepares students well for the workforce. I think it is tough for a department to completely prepare students. Since Notre Dame isn’t a technical school, it really isn’t fair to spend an excessive amount of time preparing students for a certain type of job. Students will have too varied of a future career path to do this. As a place of learning, I think the University should not focus too heavily on preparing us for work. It should be a goal, but scholarly pursuits should be the main focus.

While not a main goal, preparing students for the workforce has to at least be a goal. It would be a disservice for students to be left completely in the dark about what will be required of them in the "real world." I think the department does a good job with this. Professors will often stress the skills that will be most important in the workforce. There was one complaint I had with the department that they have actually fixed for later graduating classes. I often felt that the order of our classes didn't fit well with the skills we needed for technical interviews. The main problem is that we take our "Design and Analysis of Algorithms" class senior year, while many people are going through technical interviews. The issue is that this is one of the main classes tested in these interviews, and we have not completed it by that point. However, future graduating classes will be taking this class earlier. I believe this shows that the University does care about preparing students for interviews and the workforce.

 

Blog Post 7-Therac-25

There were two main causes for the Therac-25 accidents. One was the actual software controlling the machine, which had bugs that would cause fatal errors. Another key issue was that there was no hardware responsible for handling fatal errors, which basically all safety-critical systems have. There were no interlocks or last-ditch hardware techniques to prevent catastrophic failures. These two causes combined to show us a side of software engineering that many people don’t realize exists, that software engineers are often responsible for the lives of others.

The Therac-6 and Therac-20, the previous versions of the machine, relied on a combination of software and hardware. Thus there were hardware interlocks in place to prevent the user from doing anything catastrophic. There was also software present that allowed for faster setup of the machine, but the hardware safety measures remained in place. However, in the Therac-25 iteration, the decision was made to rely on software only. The hardware safety measures were completely removed, leaving the system vulnerable.

It took a while to determine the cause of the software issue; it was something that the maker of the machine couldn't reproduce. However, an actual user was eventually able to reproduce the error. If a user selected "X-Ray mode," the machine would take 8 seconds to set up. However, if the user switched to "Electron mode" while "X-Ray mode" was setting up, the turntable would not switch over and would be left in an unknown state. This seems like a simple error that would easily come up through rigorous testing, but it turns out the testing wasn't that rigorous. No timing analysis was performed, which would have caught the error. The fact that testing wasn't rigorous on a safety-critical system is certainly a huge issue. This issue was fixed in an update, but then another patient was overdosed due to a completely different error. The company behind the product seemed completely inept at making safety-critical systems.
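The shape of the race condition can be sketched in a toy simulation. This is a hypothetical model for illustration, not the actual Therac-25 control code: setup takes several "ticks," and an edit that arrives mid-setup changes the mode variable but never restarts setup, so the turntable finishes positioned for the old mode.

```python
class ToyTherac:
    """Greatly simplified model of the Therac-25 mode-switch race."""
    SETUP_TICKS = 8  # standing in for the 8-second setup window

    def __init__(self):
        self.mode = None        # what the operator has selected
        self.turntable = None   # what the hardware is actually set up for
        self.pending = None
        self.ticks_left = 0

    def select_mode(self, mode):
        self.mode = mode
        # BUG: setup only (re)starts when the machine is idle, so an edit
        # that arrives mid-setup updates `mode` without updating `pending`.
        if self.ticks_left == 0:
            self.ticks_left = self.SETUP_TICKS
            self.pending = mode

    def tick(self):
        if self.ticks_left > 0:
            self.ticks_left -= 1
            if self.ticks_left == 0:
                self.turntable = self.pending  # positions for the STALE mode

m = ToyTherac()
m.select_mode("xray")
for _ in range(3):
    m.tick()
m.select_mode("electron")   # operator edits during the setup window
for _ in range(8):
    m.tick()
print(m.mode, m.turntable)  # electron xray -> inconsistent state
```

A timing analysis, or any test that exercised input during the setup window, would have exposed exactly this inconsistency between the selected mode and the turntable position.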

As we can see from this case, there are certainly unique issues that software developers face when working on safety-critical systems. Most people don't really think of developers as being responsible for people's lives; they think of them as people who make games or websites, where bugs will at worst cause monetary losses. However, people don't realize that today software is in basically everything, certainly including safety-critical systems like airplanes or pacemakers. The software developers behind these projects are responsible for people's lives. I think a main challenge for these developers is to not become too detached. It is very easy when coding to forget that a mistake you make could lead to loss of life! This is probably not the first thing you are thinking of when typing code into a machine, because you are so detached from the actual user, but you must remember it.

However, I believe most of the responsibility is borne by the project managers. It was the project leaders at AECL who were responsible for people's lives. They couldn't cut corners on developer skill or testing. One of the first things a good project manager will do at the beginning of a project is determine its risk, so they can determine how many corners they can cut. On safety-critical systems, that number is essentially zero. So it's tough to blame the inexperienced and unqualified programmer in the case of Therac-25. The blame is on the managers who decided to use this type of programmer and decided not to test rigorously.

Blog Post 6: Diversity in the Tech Industry

Is the lack of diversity a problem in the technology industry? I think the answer to this question depends on the scale of the problem you are attempting to tackle. Are you looking at the diversity problems of individual firms in the technology industry, or are you looking at diversity in the industry as a whole? I believe that both problems are something that need to be addressed, but come from different causes and thus have different solutions. If an individual firm is having a diversity problem, like having significantly lower numbers than the industry average, the firm must look inward to see if there are any cultural biases. However if we are addressing the industry wide problem, like the technology industry having lower minority representation then other industries, we must look at our culture as a whole. What ingrained problems do we have in our society to cause these issues?

There are certainly cases where companies fall behind industry standards in terms of minority representation. It's on the company to recognize the problem, which is often the hardest part, and then rectify it. This problem can be very hard to address, as it can be deeply ingrained in company culture. The Forbes article gives a good framework for attempting to fix this problem in a company's culture. The first step is to get an outside source to assess the company's culture for discrimination. This is extremely important: since the problem stems from inside the company, attempting to fix it with only company resources might prove fruitless. Outside training is often important as well, as many employees will have cultural biases without even realizing it. The rest of the steps deal with programs aimed at supporting minority workers already at the company, but I want to focus on these first two steps. Programs to aid minorities are certainly important, but I think the programs focused toward non-minorities are more pertinent to our problem. The first step is to remove the cultural biases at a company, allowing the company to achieve higher diversity. Then a company can focus on its minority programs.

Now when the scope of our question changes from individual firms to the industry as a whole, the cause of our problem changes and thus our solution must be different. Unlike a problem at an individual company, the problem at the industry level isn't necessarily a fault of the industry. There may be cultural problems inherent in the tech industry, but I do not believe that is why we see such a large lack of diversity compared to other fields. I believe it is because of cultural problems across society as a whole. There is certainly a stereotype of the typical "tech person," and there are certainly groups left out of that stereotype. One example is women. Since society typically thinks of women as less tech-oriented than men, we see fewer women pursuing tech fields. It's not that women are "worse" at tech fields; it's that many don't even think of them as an option. This is a cultural problem that doesn't need to be addressed at the industry level, but in academia. I think Notre Dame has done a great job at this. There are many resources available for women in STEM fields, and resources that attempt to recruit women into STEM fields. In my personal experience I think it has helped. To me at least, it appears that more and more women have been studying computer science since I have been at the university. In summary, the industry-wide problem does not have to be solved at the industry level; it needs to be addressed by academia.

Blog Post 5: Why Startups?

This past semester I actually took a class on entrepreneurship and startups. It was a very interesting class that gave me an insight into the mind of an entrepreneur and why startups are all the rage right now. The positives when compared to a standard job are certainly convincing. You have total control of the company, taking your startup in a direction that seems interesting to you. You’re not going to end up a “cog in the machine” where you feel that your work is pointless. And there’s much greater growth potential in terms of salary and wealth. You could see the excitement in the eyes of the entrepreneurs that came and spoke with us. They really believed they were making a difference day in and day out, something that you might be hard pressed to find in a large company employee.

However there’s the darker side of startups that we really weren’t introduced to in this class, the failures. We never had someone come in whose startup had completely failed (most likely because this person wouldn’t exactly want to speak in an entrepreneurship class); it would have been nice to remind us that the majority of startups fail. Some people, the true entrepreneurs, are willing to take this massive risk early in their careers. I am not.

Not everybody is an entrepreneur. Some people, like me, feel the negatives outweigh the positives. Having a secure, paying job outweighs the chance of a startup exploding in growth. Having a secure, paying job outweighs getting to run your own company and the power that comes with it. I also believe that some of these positives can be greatly overstated.

As shown by the danluu.com article, even if you join an eventually successful startup, you could have become wealthier joining a large company. There are still huge inherent risks, even if the company becomes successful, depending on how an IPO goes or on your individual contract. There's also the idea that you will be performing meaningless, uninteresting work if you don't work at a startup. I definitely disagree with this. If you join a large company you will always be able to find meaningful, interesting work. It's on you to find that work and move toward it within the company or when job searching. The danluu.com article shows that large companies are often doing the most innovative work in the industry. It's up to you to find this type of work at large companies, and it can certainly be done.

In my post-graduation job searching, I really didn’t have any desire to work for or begin a startup. At this point I don’t believe the risks outweigh the benefits for me, but I can certainly see why they would for an entrepreneurial type. I was looking for more of a balance between a startup and a large company. I was looking for a small company where you didn’t feel like a “cog in the machine”, but stable enough that it didn’t really have a chance of going under during my employment. However, in the future when I am hopefully more financially stable, I could see those benefits starting to outweigh the risks. Maybe if the right opportunity arises I will join or begin my own startup.