In this contemporary age, the world wide web, along with technology in general, is growing ever more location-based.  A decade or so ago, location wasn’t an important factor.  In many cases, people had to visit a specific location just to gain access to the internet, such as a library or school.  Today, however, web access is nearly global, allowing people to connect from all sorts of mobile devices.  Because of this mobility, in conjunction with the evolution of technology, location has emerged as a key factor.

This fairly recent development has created a whole new dimension of internet culture, especially in terms of social networking.  Social media websites encourage users to “check in” with their locations and broadcast them to their “friends” or whoever is “following” them.

Users who take advantage of this feature do reap certain benefits.  Tying a physical location to one’s online persona bridges a gap between cyberspace and the real world, which in turn promotes an unprecedented sense of social awareness.  Sharing a location helps people find friends, or potentially meet new ones, at the same place.  “Checking in” serves a commercial purpose as well: people who “check in” at certain retail stores are rewarded with discounts for providing the store with a fast and easy form of advertising.

However, there are plenty of drawbacks associated with this technology as well.  Some people may feel uncomfortable knowing that their location is being monitored constantly by a piece of technology.  This leads to issues concerning invasion of privacy.  And beyond that, some users may not wish to share their whereabouts for personal or social reasons.

There is nothing we can do to stop the advancement of such “invasive” technology; in fact, its development can potentially be a powerful asset.  We should, however, be cautious and thoughtful concerning our personal information, including location.  If we’re not careful, the entirety of our privacy may be compromised without our even realizing it.


Is the internet such a necessary tool that those unable to access it would be considered at a disadvantage?  And are the lengths taken to improve accessibility more beneficial than they are cumbersome?

Those who have major visual impairments are among the most disadvantaged when it comes to using the internet, because the majority of the web revolves around visuals and text.  While there are programs that read aloud what is on the screen, using them is an arduous process that requires a great amount of patience.  However, those who merely have difficulty reading small text, such as the elderly, can be easily accommodated.  A few of the many ways to help are adjusting browser zoom, text size, and screen resolution, or using the on-screen magnifying glass.

People who have hearing impairments aren’t disadvantaged much online.  Many videos include closed captioning, and websites rarely feature audio that contains key information.  Some computers even feature visual accompaniments to important auditory alerts; for example, the screen may flash white when an auditory notification is received.

In terms of dexterity disabilities, accessing the computer and the internet may prove quite difficult.  The keyboard and the ability to type are integral parts of using a computer, and if one cannot manage them, access may be nearly impossible.  One accommodation offered in response to this is an on-screen keyboard.  In the event that a person cannot use the mouse or the keyboard, some computers feature voice commands.  However, these prove cumbersome, requiring users to use correct intonation and to speak and enunciate precisely.

Because of these limitations, users with disabilities may find attempting to access the computer and internet more frustrating than rewarding.  If the quality of these assistive tools improved, or if new technological innovations were created, the disabled would be better able to realize the full potential that computers and the internet offer.

I hadn’t really given much thought to segregation and inequality online before.  I wouldn’t be so naive as to say they don’t exist, but I don’t consider them a foremost issue.  After some thought, I’d say inequality is as prevalent online as it is in everyday life.  After all, the internet is merely a projection of our collective self into cyberspace.  When I read the article For Minorities, New ‘Digital Divide’ Seen, I grew suspicious of over-exaggerated generalizations and assumptions (yes, my crap detector went on full alert).  The article mashed together two seemingly unrelated ideas with such logical stretches that even I became skeptical.  Although I am wary of the article’s credibility, it did bring up the topic of inequality online, which I hadn’t considered before.

Some areas of the world are still without internet access, and some developing countries have connection speeds so limited that the internet’s usability suffers.  In a way, however, I believe this is a blessing rather than the curse some might call it: the absence of a pestering cyberspace leaves time for a greater sense of community and for interpersonal relationships.  It also contributes, in part, to the digital divide.

The second half deals with segregation and racism online, another part of the article that left me dubious.  It likened black usage of social networks to the decades in which slavery was prominent and close social ties had to be maintained.  I see no evidence of this occurring, and I’m sure more than one person might be offended reading such statements.  Yes, I do believe that social networks help us stay in touch, and they can promote connecting with people of similar ethnic backgrounds, but these websites also advocate meeting new people from across the globe.  While it is an individual’s choice whether to take advantage of this, it is also their choice to participate in groups dedicated specifically to their race.


Because the web is ever-expanding, it is difficult to delegate the task of organizing it all to any one agency.  This creates the perfect scenario for the birth of folksonomy.  Why bother creating an incomplete organizational scheme when there are masses upon masses of people ready and willing to take up the task?  For an individual, it’s a simple matter of tagging a few words to their published content, but collectively this creates a useful search tool for the public.

Folksonomy allows for faster evolution and quicker responses to change, which isn’t possible in a traditional hierarchical taxonomy, slow as it is to respond to current events.  This newer method also lets people discover new content through related tags, which is easier to do than in traditional taxonomies.

This method isn’t an infallible solution; it does have its disadvantages.  While tags can be as descriptive as users care to make them, there is no synonym handling.  If someone searched for “cats” and the photo they were looking for was tagged “kitty” instead, it would be impossible to find with that search.  And while users are encouraged to use popular tags, there is no real hierarchy of organization: each term sits directly beside the others instead of in a pyramidal structure.
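The synonym gap can be seen in a minimal sketch of exact-match tag search (the filenames and tags here are hypothetical, not from any real tagging site): a query for “cats” returns nothing when the photo was tagged “kitty”.

```python
# A tiny folksonomy index: each item maps to the set of tags users gave it.
# (Hypothetical data for illustration.)
photos = {
    "photo1.jpg": {"kitty", "cute"},
    "photo2.jpg": {"dog", "park"},
}

def search_by_tag(tag, index):
    """Return items whose tag set contains the exact tag string.

    There is no synonym handling: "cats" and "kitty" are unrelated strings.
    """
    return [name for name, tags in index.items() if tag in tags]

print(search_by_tag("cats", photos))   # [] -- the "kitty" photo is missed
print(search_by_tag("kitty", photos))  # ['photo1.jpg']
```

The failure is purely a matter of string identity; a hierarchical taxonomy would place “kitty” under “cats,” which is exactly what flat tagging lacks.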

While there are definite benefits to using folksonomies (they’re free, for one), there are major gaps that prevent them from being the foremost method used.  Perhaps if there were a way to combine these two organizational processes, we’d be closer to a solution for classifying the multi-faceted web.

The content published on the world wide web is made up of valuable resources as well as general garbage, and some of that garbage conceals itself behind fancy menus and pseudo-intellect.  The challenge is how to differentiate between the two.  Howard Rheingold, in his article Crap Detection, mentions several ways to determine a website’s credibility, including identifying the author and his or her credentials, checking whether sources are provided for factual information, and seeing what others have to say about the website and its author.

Before I knew of these ways to determine a website’s credibility, I relied heavily upon my own instinct.  In retrospect, this routine contained huge pitfalls.  If I desperately needed information, I could will myself to see past obvious red flags and believe that the information was factual.  Now, with an arsenal of websites and an improved grasp of “crap detection,” I am well prepared to seek authenticity online.

I have put Rheingold’s methods to the test, both in and out of class, and have found his process surprisingly successful.  In an exercise to find sources for insufficient Wikipedia articles, I had to search through numerous websites for credible sources for certain statements.  For example, when trying to improve an article on 007 GoldenEye Reloaded, a newly released video game, I came across many seemingly factual websites.  Upon closer inspection, however, many had simply paraphrased the Wikipedia article I had just reviewed and added personal opinions.  It took me hours to discover a genuinely factual website containing the information I needed to cite the statement:

“Each computer player possesses its own AI-bot system to make them dynamic and challenging.”

Although the process was tedious, the satisfaction gained was well worth the effort.  Learning how to critically evaluate a website and its author in terms of credibility and factuality is a skill that will definitely be of use.


In our use of the world wide web, regulation of our activity isn’t something at the front of our minds.  Yet in nearly everything we do online, one private company holds supremacy: Google.  Its name has become something we’ve grown familiar with over more than a decade but nevertheless do not fully understand.

People should familiarize themselves with ways to guard against Google’s point of view in order to defend against potentially insulting or offensive content.  In the words of Ernest Hemingway, “Every man should have a built-in automatic crap detector operating inside him.”

Because Google controls a vast majority of the web, regulating it is no small undertaking.  Its strength lies in the millions of people who rely upon its services daily and see no reason to find fault.  And in many ways, the company is delivering exactly what is being asked for: a moderated, fast, and safe browsing experience.  In return, however, users’ data is collected in order to personalize the ads they see.  This big-brother-esque observation may be of legitimate concern to people who wish to protect their privacy online.  Regulating a huge multi-billion-dollar entity such as Google is near impossible, and thus the only theoretical alternative I see is a widespread boycott.  Speculating about the creation of a public agency to regulate the web is idealistic, due mostly to Google’s enormity and its loyal fanbase.

However, until Google compromises quality, I am willing to share the bare minimum of my personal information in exchange for the convenience of its services.  In fact, it would be more difficult for me to find alternatives in this internet world dominated by Google.

Usually I spend a good four or five hours online daily. Today, however, my schedule was too busy to include more than a glimpse of Facebook and my email. But I did keep a journal of how I filtered each of the websites I visited. I had never analyzed my browsing history in such a way before, and I discovered some interesting characteristics of mine.

In keeping a keen eye attuned to how I used the internet, I discovered that my primary means of gathering information was the search engines featured on nearly every website. From there, I would click on the link that most accurately offered what I was looking for. For example, if I were searching for a definition, or for basic information about a historical event, I would use Wikipedia. This is the most basic way of finding what you need online, and while it takes a bit longer and may not be as efficient, it is still one of my favorites.

The second filter I used was subscriptions to my friends’ various blogs and to certain YouTube channels. This filter brings you the information you want to see instead of requiring you to find it yourself. It can, however, create a sort of “tunnel vision” if people become too comfortable with it; without another filter, they may never discover new areas and interests online.

While people knowingly filter their information online, some may not realize that it is being filtered for them as well. Take, for instance, Facebook. In a recent upgrade, Facebook marks news stories that it thinks may interest you, depending upon your relationship with the person and the content of their post. YouTube uses a similar filter, recommending videos that users might be interested in based on videos they have viewed in the past.
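The idea behind these platform-side filters can be sketched with a toy example (the posts, authors, and affinity scores below are made up, and real systems are far more elaborate): rank a feed by how often the viewer interacts with each author, and show only the top few items.

```python
# Hypothetical feed-filtering sketch: posts a viewer could see.
posts = [
    {"author": "alice", "text": "new blog post"},
    {"author": "bob",   "text": "vacation photos"},
    {"author": "carol", "text": "song cover"},
]

# How often the viewer has interacted with each author (made-up counts).
affinity = {"alice": 12, "bob": 1, "carol": 7}

def filter_feed(posts, affinity, top_n=2):
    """Keep the top_n posts from the authors the viewer interacts with most."""
    ranked = sorted(posts,
                    key=lambda p: affinity.get(p["author"], 0),
                    reverse=True)
    return ranked[:top_n]

for post in filter_feed(posts, affinity):
    print(post["author"], "-", post["text"])
# alice - new blog post
# carol - song cover
```

Note that bob’s post is silently dropped: the viewer never learns it existed, which is exactly why knowing these filters are running matters.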

Understanding these filters is crucial because without this knowledge, we cannot fully know if what we are seeing is the entire spectrum. This knowledge works conversely as well. Comprehending filters enables users to use them to their advantage to maximize efficiency online.

Facebook – Searched for friends

Tumblr – Bookmarked a friend’s Tumblr



Paying complete attention is something I cannot do well.  With today’s fast, instant access to entertainment, news, and anything else you’d like, I find it difficult to focus solely on one thing.  We were assigned to write two summaries of Wikipedia articles, one in a high-distraction environment and another with low distractions.

At first, I thought I’d enjoy the first more, just because it was an assignment that allowed me complete freedom.  I could simultaneously be on Facebook, listen to music, and google whatever fantastical whims passed through my mind.  However, I discovered that I became easily frustrated and had no desire to do any of the activities I was involved in.  Reviewing my post, I find it less coherent and more disjointed than usual.  In retrospect, this is because my attention jumped from the television back to my summary multiple times while I wrote a single sentence.

I enjoyed writing the second assignment much more than the first.  I had the time to thoroughly read the article first, before mentally outlining what I wanted to write.  The amount of time I spent writing was shorter than I had anticipated too.  The wonders of what you can accomplish when you put your mind to it!

Without the constant buzz of omnipresent technology, I was able to write far more eloquently, rather than merely rewording the article as I resorted to in the first activity.  Surprisingly, it took me less time and seemingly less effort when I dedicated my full attention to what I was working on.  Having technology available in this situation was more detrimental than helpful; I was more apt to complain to my friends than to use the resources at hand to write a better summary.


Written by Audrey Niffenegger, The Time Traveler’s Wife is a novel centering on Henry DeTamble and his struggle to cope with an unusual genetic disorder that causes him to spontaneously travel in time to past and future events in his life.  The book also follows the perspective of his wife and how she copes with his unexpected absences and the stress they cause.  Niffenegger’s novel can be classified as both science fiction and romance, as it portrays themes of love, loss, and free will.
Audrey Niffenegger drew her inspiration from her own experience of failed relationships as well as her parents’ divorce.  She took the title from the epigraph of a 1964 book, Man and Time:


“Clock time is our bank manager, tax collector, police inspector; this inner time is our wife.”


At first, Niffenegger had a difficult time finding an agent; she went through twenty-five rejections before she came upon a small publisher, MacAdam/Cage, which was enthralled by her work.  The novel was immediately successful, debuting in ninth place on the New York Times Best Seller list, and was met with near-unanimous praise.

The novel has had several versions recorded as audiobooks and has also inspired a film adaptation starring Eric Bana and Rachel McAdams.  Audrey Niffenegger went on to write another novel, Her Fearful Symmetry, released in 2009, and is now working on a third.