
Tuesday, July 1, 2014

A Discussion on the Ethics of Experimenting on a Synthesised Human Intelligence

An article in Scientific American I read a few years back recently popped back into my head. I apologise that the full article is not available, but what is visible gives enough context for our purposes. The dilemma emerges when we ask whether it is ethically sound to experiment on a simulated human intelligence (SHI). It has been suggested that as these SHIs become a more accurate facsimile of the real thing, they will become excellent tools for modelling, and potentially curing, many mental disorders and ailments. Is it okay to give this potentially conscious entity schizophrenia? Huntington's disease?

There are two approaches we can take: utilitarian or deontological. The utilitarian approach would be that the benefits gained from discoveries regarding brain function outweigh the potential suffering the SHI may endure. The deontological stance would argue that it is always wrong to actively cause suffering to another conscious being. Utilitarianism's offer is quite attractive, but I wonder whether, logically, it is a slippery slope to arguing for experimentation on actual humans. The thought seems ridiculous, but do we not devalue the ideas of sentience and sapience if we experiment on a potentially conscious being without its consent or control? Why is it alright to do so to an SHI and not a child? Or someone with a mental handicap? Indeed, the gut reaction is that the SHI is somehow less real than a physical human, or lacks true consciousness, so somehow it isn't the same. Unfortunately there are no grounds to support this distinction.

At its very base, even the utilitarian calculation would render unwieldy results. In one scenario, if the SHI must undergo many thousands of runs to obtain a proper data set, its total suffering would weigh heavily against the benefits gained. Considering the above, the deontological route appears the most reasonable course of action.
It is essentially already practiced, as society generally recognises that causing unnecessary harm and distress, as well as experimenting on any person without their consent, is not acceptable, even with the potential of great advances by doing so. It would be prudent to hold the same true for an SHI. If the facsimile is true enough, we must assume the SHI has the potential to be as conscious as you or me, and is therefore protected by human rights. In conclusion, an SHI should be treated no differently from a living, breathing person, safe under the same rights guaranteed to its creators.
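To make the utilitarian calculation above a little more concrete, here is a toy tally in Python. The numbers are entirely hypothetical assumptions chosen for illustration only; the point is just that a fixed benefit is quickly swamped once suffering is multiplied across thousands of experimental runs.

```python
def net_utility(benefit, suffering_per_run, runs):
    """One-time benefit of the discovery minus total suffering
    accumulated across every run of the experiment."""
    return benefit - suffering_per_run * runs

# With hypothetical units, a single run might look worthwhile...
print(net_utility(benefit=1000, suffering_per_run=1, runs=1))     # 999

# ...but the thousands of runs needed for a proper data set
# reverse the sign of the calculation entirely.
print(net_utility(benefit=1000, suffering_per_run=1, runs=5000))  # -4000
```

Of course, no real utilitarian ledger reduces to two numbers, which is partly the point: the calculation becomes unwieldy exactly as described above.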

Monday, June 9, 2014

A Future Machine Since the Dawn of Time

   Imagine, if you will, that some form of advanced civilisation is not only able to travel through time, but also to build through the expanse of time as easily as we can now build through the volume of space. If the structure were simply an observer, or a device used for observation, it could view all events just as a satellite can view the Earth's surface. One of the things I find most fascinating about such a structure/machine is that it could be built at any point in time, and once constructed, could potentially instantaneously exist at all points in time!


   For example, say the discovery of how to make such a miraculous thing doesn't come for another ten thousand years. For the sake of having some sort of system/visualisation, let's say that the device is grown like tendrils through what you could analogise as temporal space. Once it had grown across all of this temporal expanse, then from any given point in time, if you could see it, it would look like it had always existed, despite not actually being built and spread until a much later "relative" point in time (relative in the sense that from the device's perspective everything is atemporal; there is no past, present, or future. All of time is laid out like a map before it).


   Just a fascinating idea I wanted to share for today!

Thursday, May 8, 2014

Living in a Post Automation World: A Nietzschean/Utilitarian Reanalysis

It's been a while since my last post! School became rather heavy and I needed to prioritise. I am back now and hopefully with a vengeance!

   As of late I've been listening to a very entertaining, engaging, and insightful philosophy podcast called The Partially Examined Life.

   For anyone interested in philosophy of all sorts from any educational background I would highly recommend it.

   As someone with no formal philosophy background myself (simply interested in ideas and ponderings of all sorts), I would like to comment on one of my previous posts, namely, as the title of this post suggests, "Living in a Post Automation World". This is spurred by my listening to the podcast's brief analysis of Nietzsche's works. I would like to stress that I am eager to hear any and all comments relating to this, as I am certainly no expert, and I look forward to hearing what you may think.

   To briefly review, I asserted in the relevant post that automation is expanding rapidly in both scope and ubiquity. Although this increases overall productivity worldwide, it currently has a displacing effect on the human workforce by and large, which in the long run potentially undermines people's capacity to purchase produced goods (if they are unable to find employment). My solution, in a nutshell, was to allow workers directly displaced by the purchase of a given machine to purchase a share of that machine's productivity, thereby allowing displaced workers to collect a portion of the machine's financial output while also contributing to its purchase and maintenance. In the long term this could lead to a society where most people would collect wages from machine productivity without ever having to actually work (please read "Living in a Post Automation World" to get the full idea if you haven't already).
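   The share scheme described above can be sketched in a few lines. This is a minimal toy model, not a worked-out economic proposal: the revenue, maintenance cost, and share figures are all hypothetical assumptions I've invented for illustration.

```python
def worker_income(machine_revenue, maintenance_cost, worker_share):
    """A displaced worker's 'wage': their share of the machine's
    net output after maintenance is paid for."""
    return (machine_revenue - maintenance_cost) * worker_share

# Suppose a machine replaces four workers, each holding a 25% share
# of its productivity (all figures per period, in arbitrary currency):
income = worker_income(machine_revenue=200_000,
                       maintenance_cost=40_000,
                       worker_share=0.25)
print(income)  # 40000.0 per worker
```

   Even this crude sketch shows the basic shape of the idea: the machine's output, minus its upkeep, flows back to the very people it displaced.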

   I would now like to introduce some Nietzschean concepts to elucidate the benefits of my position, but from a materialistic and, perhaps counterintuitively, utilitarian standpoint.

   Nietzsche talks about the shift from a "Master Morality" to a "Slave Morality". Master morality is such that the powerful, strong, and noble exert their will on the weak ("slaves") to promote and realise their goals, and ultimately what is good for them (specifically the individual). The slave, however, is a reactionary force. The slave is interested in creating a system opposed to the master's: one based more on communal equality, and on sacrificing oneself for the benefit of one's peers. As Nietzsche alludes, the dynamic of the slave morality left unchecked has the potential to stifle greatness; one's personal aspirations must come second to those of the whole.

   According to Nietzsche, we currently live in a world dominated by the slave morality (the Judeo-Christian moral system). I would have to say, though, that it would simply be unacceptable to enter into a master system in the traditional sense nowadays, as exploiting weaker people for one's personal self-expression is beyond unethical. Where this plays into a world of pure automation, though, is that with machines performing labour on people's behalf, essentially fulfilling the "slave" portion of the equation, it would free up more time for the pursuits that people truly wish to develop. This would lead to more great achievements, as people would now have the time to create works that were not possible before due to the excessive time and energy drain of laborious tasks. Indeed, there may even be a large portion of the population that chooses not to pursue any significant development. Those that are motivated, though, would be unhindered in performing great feats, to the benefit of society as a whole, enriching humanity. Every person would be in a position to live the "master" life at his or her discretion, without the exploitation of other people. In this sense we can truly achieve a balance in the master-slave dynamic: utility is maximised for all, along with freedom of self-expression.

   Another important point I would like to briefly express is that Nietzsche describes a society as only as good as the number of "parasites" it can accommodate. In an automated society, the number of "parasites" that could be accommodated is limited only by the productivity of the machines, accessibility to said productivity, and resources, all three of which are completely under humanity's control. Machine productivity can be varied according to society's needs, accessibility would need to be publicly checked to ensure that everyone's basic needs are appropriately met, and resources are subject to human ingenuity in obtaining them.

   In closing, there are points Nietzsche makes that I find distasteful (particularly his views on democracy), and this is simply my interpretation of a small set of Nietzsche's philosophy as applied to my particular ideas. I do, however, think that we are in a position to realise a society in which people are free to achieve their goals and greatness, while still maintaining the good of the whole.

I encourage analysis of your own below! Thanks for reading!