Okay, this post is going to be a sort of follow-up to the last one (which was about scientific research, if you were wondering). Last time I talked mainly about how scientists (ought to) do research. But scientists (and research) are also influenced by factors outside the lab (quite a shock, I know). To illustrate, let me sketch a picture of how research would work if it existed in a complete and idealized vacuum.
A scientist spots a phenomenon that interests them. After reviewing the existing literature on the subject and integrating it with their own observations, they formulate a theory as to what makes this particular phenomenon occur in a certain context. They think of a way to vary what makes the phenomenon occur in a controlled environment, decide what should be measured, and work out which results would support their theory and which wouldn't. Then they run the experiment, collect the data, analyse the data, and report their theory, predictions, methods and results to their peers.
Obviously this is a great simplification, and depending on the field and type of research things might work a little differently, but I believe this example is functional enough to illustrate my point. Now let's take a look at how research might work in the real world.
A scientist spots a phenomenon that interests them, either in the natural world, the lab, or somewhere in the existing literature. Or they receive a grant to perform research in a specific direction (which may be broad or very specific). If not, they will either need to find funding or already have access to it. To make the research seem more valid or acceptable to peers, or simply to avoid reinventing the wheel, they might derive their questions and/or predictions and/or methods from existing literature. Because the scientist has a job, the decision whether or not to actually execute the experiment will probably also be influenced by their perception of how publishable their report will be. If they run the experiment, they again collect the data, analyse the data, and report their theory, predictions, methods, and results (in the last post I went into some of the things that can go wrong or right in these last few steps). These are sent to a journal, which in turn sends the report to peer reviewers (at least if it's a reputable scientific journal, I guess).
Now from what I've heard from some researchers, I'm using the term "peer" pretty loosely here. Because there are often competing explanations for any given phenomenon in scientific research, reports are often reviewed by researchers who support a competing explanation, on the assumption that they will be more critical of the research. Which is probably a valid assumption. But sometimes this practice leads to reports going unpublished, or to text, methods, or experiments having to be added or removed, purely on the basis of methodological disagreements that are fought out in the rather private setting of review instead of in the open in journals. Now maybe this is a phenomenon exclusive to the behavioural sciences, but I'm guessing other researchers are just as human as behavioural researchers. It might also be that I've been primed for this sort of information through previous lectures and such, or that this type of information is reported more often than all the successful publications, or simply stands out more. But I digress.
If you compare these two situations, there may be quite a few unwanted factors influencing what is being researched and how, and what is being published. Researchers are stuck in constraining science, where it is not so much scientific principles that dictate what to research as individual beliefs and career ambitions, organisational requirements and regulations, and publication requirements. There have to be funds to do research, and also "consumers" of said research in the form of journals and the readers of those journals. I understand the need for some constraints, but I believe the ones in place may be the wrong ones.
Now I want to take a look at another business where information is (or should be) the main currency (or main service might be a better word): the news. For many years, news reached people in a top-down manner. Journalists travelled around the world in search of news and reported it when they found it. What I mean by working in a top-down manner is that what was news and what was not was dictated by journalists and editors, and, assuming they wanted to sell copies, by their views of what the public wanted to know. Nowadays, pretty much everyone travels everywhere and is equipped with the means to report on any news they perceive to be important. And, through the internet, if a lot of other people also find it important (or interesting, or amusing), it will be seen by millions all over the world. This has created a situation where news has a fair probability of reaching people in a bottom-up fashion. That is, actual participants in any kind of situation can report on it instead of the mainstream media, and the mainstream media now often report on these situations if they attract enough attention online.
My hopes for scientific research lie in the addition of these bottom-up methods to the top-down methods already in place in science (that is, where money, journals and editors dictate what gets published, maybe more so than researchers do). And it seems I am not the only one!
Two great initiatives are (from my perspective) implementing a more bottom-up strategy.
The Open Science Framework is a place where researchers can create and share research. Detailed information on all aspects of a study can be shared, which increases transparency and thus (if it's solid research) validity. It possibly also makes it easier to replicate studies, and hopefully it will stimulate discussion with interested peers at all stages of research, without the usual restrictions of being at the same institute or already being acquainted.
PLOS ONE is an online peer-reviewed journal that operates under the Creative Commons Attribution License and, on principle, publishes research from all disciplines of science as long as it's found to be methodologically sound. Now I have to add that there is a fee required to be published, but all this information is available to be read and shared for free! This is not only a good thing for scientists but for anyone interested in critically evaluating scientific research for themselves.
The former of the two deals with some of the problems I described in my previous post, while the latter ensures that how publishable a paper is depends on whether it uses sound scientific methods instead of on its results.
Now all we need is some form of crowd-funding for scientific research. This may be my idealistic side again, but imagine that instead of donating money to (questionable) charities dedicated to finding a cure for a certain disease, or sponsoring one child to get a cure, there were public fora where medical researchers post research proposals, other scientists publicly discuss the validity (and such) of those proposals, and anyone could invest in these types of research, and maybe even in the distribution of the medicine if one is successful!
I accept that this view may be a bridge too far and may not be entirely realistic. But I still feel that these developments may one day make it easier to have research that is more independent, more unconstrained by the ideas of a few people of what science should be.
By liberating science from the top-down headlock it is in, we may start practising liberating science as opposed to constraining science.
Tuesday, February 4, 2014
Scientific Research in Psychology
This week, the topic for my class meeting is good research practices, and in particular replications. For those unfamiliar with scientific methods: usually, once a certain effect is found in one study, it is replicated by other researchers to confirm the effect. These researchers can copy the exact methods the original study used (known as, who would have guessed, an exact replication), but in psychology this isn't possible. Even if you could get the same subjects, they wouldn't be the same any more. So researchers could settle for a direct replication. This means copying the methods as exactly as possible, so one could employ subjects of the same age/gender/educational level, use the same stimulus material and presentation times, etc.
This tells researchers whether the effect actually exists and wasn't the result of chance. Once enough independent researchers find an effect, it could be tentatively accepted. But it says nothing about the scope of the effect. Does it hold in all cultures? Are there gender or age differences? It could also be that other researchers think the original study had a methodological flaw, and that's why an effect was (or wasn't) found. These researchers may want to change the method a bit, to create what is (in their eyes) a more valid way to measure the studied concept. When researchers change things in the method but try to study the same concept in (slightly) different circumstances, the replication attempt is dubbed a conceptual replication.
As you can hopefully see, replications are a vital part of scientific research. Any one study could produce a "false positive" (or "false negative", but I'll get to that later) in any number of ways without any bad intentions on the part of the researcher, so no single study can provide sufficient proof for anything (something to keep in mind when you come across articles about scientific studies in non-scientific media as well). The problem is, some researchers in psychology find it hard to get their replication attempts published, even in the journals where the original studies were published! As one needs publications to maintain and build an academic career, you can probably imagine how many researchers in psychology attempt replications. Furthermore, studies that don't find the effect they're looking for hardly get published at all. Imagine how many researchers are probably performing replications without even knowing it, because no published material is available on the subject!
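To get a feel for how easily chance alone produces a "false positive", here's a toy simulation (a sketch in Python; the sample sizes, the zero true effect, and the 5% significance threshold are just my illustrative choices): it runs thousands of "studies" in which there is no real effect at all and counts how often a t-test still comes out significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies = 10_000   # hypothetical number of independent "studies"
n_subjects = 30      # hypothetical subjects per group

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the SAME distribution: the true effect is zero.
    group_a = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
    group_b = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:  # the conventional significance threshold
        false_positives += 1

# Roughly 5% of these null "studies" come out "significant" by chance alone.
print(f"False positive rate: {false_positives / n_studies:.3f}")
```

Around one in twenty of these studies "finds" an effect that isn't there, which is exactly why independent replications matter so much.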
So far I've talked about variables that influence findings through no fault of the researcher. But there are a number of things researchers can do to influence results, with or without intent. Research in psychology has found time and time again (harharhar) that one's thoughts, ideas and opinions influence one's perceptions, as is the case with confirmation bias. So even without intent, one could lead subjects to give desired responses, exclude subjects who produced unfavourable data, etc. On the former, researchers attempt to standardize interactions with subjects to reduce such effects, or double-blind procedures are employed. But when it comes to the data, there is still a lot of standardization to be achieved.
During one of my practical lectures we received an assignment where we were asked to look through data and discuss which subjects we would exclude. Now, excluding subjects is always ambiguous business. The data were collected online, which means there is a lot less control over the environment the subject is in. So subjects are given a colour-blindness test and some general questions about the subject of the study (what do they think it is) and the amount of noise and distraction in their environment. In the end, the teachers said that instead of excluding subjects solely on the basis of, for instance, their distraction score, we should always look at anomalies in that subject's data and only exclude them if they have any (and report it if the study is published, of course). This seems to make sense: if their reaction times are all over the place, or they answered a lot of the easy items wrong, anything really out of the norm. Another thing we should do is run the analysis with and without the excluded subjects to see if the results hold. If they do, we can report the results without the excluded subjects.
This was where some doubt formed in my mind. I may have misunderstood or missed something, but it seems to me that if the results don't change, there really isn't any point in excluding the subjects at all, is there? After reading the required literature for this week, my conviction that this is wrong has strengthened. If we as researchers can't be sure that we're not unconsciously guided by confirmation bias, we shouldn't look at the data before we exclude people. So rules should be set up beforehand for excluding subjects (this is something the teachers mentioned too). In my mind, this means saying, for instance, that subjects with a distraction score of 3 or higher should be excluded, period. No looking at their data. If a researcher is afraid of "losing" too many subjects this way, one could make conditional rules, like: subjects with a distraction score of 3 or higher will be excluded, unless this leaves fewer than X subjects. If it leaves fewer than X subjects, subjects with a distraction score of 4 or higher will be excluded instead.
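To make that concrete, here's a minimal sketch of such a pre-registered rule in code (Python; the threshold of 3, the fallback threshold of 4, the minimum sample size, and the field names are all hypothetical values I made up for the example):

```python
def apply_exclusion_rule(subjects, threshold=3, fallback=4, min_n=20):
    """Pre-registered exclusion rule, fixed before looking at the data:
    exclude subjects with a distraction score >= `threshold`, unless that
    leaves fewer than `min_n` subjects, in which case exclude only those
    with a distraction score >= `fallback` instead."""
    kept = [s for s in subjects if s["distraction"] < threshold]
    if len(kept) < min_n:
        kept = [s for s in subjects if s["distraction"] < fallback]
    return kept

# Hypothetical usage: each subject is a dict with a distraction score.
subjects = [{"id": i, "distraction": score}
            for i, score in enumerate([1, 2, 4, 3, 0, 5, 2, 1, 3, 2])]
included = apply_exclusion_rule(subjects, threshold=3, fallback=4, min_n=5)
print([s["id"] for s in included])  # subjects kept under the rule
```

The point is that the rule only ever consults the criterion fixed in advance, never the subjects' actual responses or reaction times.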
Also, testing with and without the excluded subjects is interesting, but it increases the chance of statistical error and leaves the door open for researchers to report only the most favourable results while simply stating that the other analysis was also significant.
Thankfully, quite a few researchers are worried about this state of affairs and have proposed numerous ways to remedy the problems. I think data collection and analysis should be as heavily controlled as extraneous variables are in experiments. What will actually be implemented, and how, remains to be seen, but if researchers in the field of psychology cannot use their extensive (although utterly incomplete) knowledge of the human condition to better research practice, it doesn't bode well for science as a whole.
Following is a list of the required readings for my class that partly inspired this post.
Simmons, J.P., Nelson, L.D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.
Nosek, B.A., Spies, J.R., & Motyl, M. (2012). Scientific Utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7, 615-631.
Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13, 90-100.
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H.L.J., & Kievit, R.A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7, 632-638.
Brandt, M.J. (2014). The Replication Recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217-224.
Ioannidis, J.P.A. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7, 645-654.
Thursday, January 30, 2014
MINECRAFT
It's funny how little graphics seem to matter for the enjoyment of a game. Since finally installing Steam on my PC, I've mostly used it for playing indie games like Papers, Please and now Minecraft. It's a stark contrast with Tomb Raider, which (to me) is all about story (or linear gameplay) and pretty graphics. Yet somehow I find myself thinking about my base, what resources I need to collect to upgrade it, where I'll get them, etc.
Needless to say, I'm hooked. I'll probably be spending way too much time perfecting my base and mastering the basics of crafting the next couple of months. Hell, who needs a real life anyway?
Robots, A.I. and the Turing-test
Today, I want to talk about a subject from my course. This course is about the foundations of cognitive brain research, which is pretty much where neuroscience and experimental psychology meet. The topic is intelligence, both artificial and human. In 1950, Alan Mathison Turing published a paper on this subject called "Computing Machinery and Intelligence". In this ground-breaking article he proposed a test (now known as the Turing-test) which would allow for the testing of A.I. If any machine could pass the test, it would possess human intelligence, according to Turing. The Turing-test has changed somewhat throughout the years, but the principle is that either two humans, or a human and a machine, communicate without seeing each other. It's one human's task to find out whether they are communicating with another human or a machine. If that human thinks they are talking to a human but it's actually a machine, the machine has passed the test. In the original test Turing proposed there are actually three participants, but the principle remains the same. In the more than 60 years that have passed since, no machine has been able to pass the test.
One field of psychology that sometimes works with A.I. is cognitive psychology. Researchers in this field sometimes use the results from human behavioural studies to program A.I., both to test hypotheses about cognition and to try and create better A.I., I presume. For those of you who are unfamiliar with cognitive psychology: it's an experimental branch that uses mostly behavioural data to infer things about our thoughts, strategies and such, and how they are influenced by, for instance, emotions. Embodied cognition is a sub-field of cognitive psychology. Where the traditional view sees our cognition and brains much like data and computers, operating separately from our bodies (in a top-down manner), embodied cognition is all about bottom-up influences: how our bodies and perceptions interact with our cognition. This field of psychology is now, like the traditional view before it, influencing how A.I. and machines are being made.
Let me try to explain this in other words. Traditionally created A.I. may be able to solve complex mathematical problems, but there isn't a chat-bot on the internet that can actually make you think you're talking to another human being (if you know of one, please let me know). Robots with traditional A.I. are barely able to navigate through a room. So although they may be intelligent, they are so in a 'cold' way. Something about them immediately informs us we're dealing with machines.
Now let me invite you to watch this TEDtalk by Guy Hoffman.
He is inspired by the field of embodied cognition to create robots with A.I. that 'feel' organic, or 'warm'. I don't know if you agree with me, but I think they seem to express certain emotions that instantly feel familiar. And judging by this video I wasn't the only one.
This made me wonder if the Turing-test is really valid the way it is. Maybe NOT letting the participants see each other actually makes it harder to pass the test. Now don't get me wrong, when you see a robot you obviously know it's not a human. But somehow the test as it is now reminds me of the traditional view of cognition, as if the mind were something completely separate from the rest of the body. As if we only judge things with our mind, and don't also use visual information, for instance. To give an example: if someone attaches lights to all their major joints and runs around in the dark, you immediately know it's a human because of the way the lights move with respect to each other.
And it might also work the other way around, of course. Maybe no A.I. can trick us into thinking it's human without at least having a body that's similar to ours. How else will we ever be able to relate to one another?
Monday, January 27, 2014
Contemplating and Games
Lately I've been contemplating restarting this blog. Or actually starting it, as I've only ever written two posts for it. I've been starting more and more different types of projects, and it may be nice to keep some sort of documentation of the progress. Topics might be stuff I learn at uni, games, my writing, stuff I make, books I read and series I watch, and maybe current topics or stuff I found on the internet somewhere. I don't really know how interesting it will be for anybody else, but at least I'll have a place to put my thoughts and retrieve them later. I'll probably pick one or two topics to do weekly to start with, and just see where it goes from there. So yeah.
I got a new Xbox 360 after my old one's hard disk crashed, and it came with Tomb Raider (also Halo 4, but I haven't played it yet). I've been playing it a lot since then, and I think it's pretty striking that graphics still seem to improve even though there's a new generation out. I was really excited about that (the new generation), but right now I think it'll be a while before I get one.
But, back to Tomb Raider. It's filled to the brim with fricking quick-time events, but they're reasonably well executed, in that they pull you into the plot emotion-wise.
**SPOILERS** (minor game play spoilers ahead)
For instance, at one point in the game you're in a sort of landslide, and you're scrambling up a mountain. You have to alternately press the left and right triggers to make Lara scramble up, which sort of reminded me of the Skate franchise, where real-world motion is approximated by the controls instead of only moving a stick in a direction to make your character move in that direction. In Skate they sort of combine these features, whereas in Tomb Raider they alternate. So I'm frantically pushing the left and right triggers to make Lara scramble up the mountain, and every now and again some large piece of rock or debris comes crashing down the slope and you have to push the analog stick in the appropriate direction to avoid it. Or in another event, where Lara's hiding in a small shed in an enemy camp and gets discovered. Some burly guy starts to give her a *bad* touch, and you have to press Y at the appropriate moment, and if you don't, you die. Then he does it again, and you have to press Y again, then cut-scene, Lara gets a gun and you get the chance to shoot him. In my experience (I had to replay the whole thing like five times due to not responding adequately) shooting him achieves fuck all, and to get through the next part I basically had to put the controller down and put my right hand on the left analog stick to move it left and right fast enough. I died at every step at least once, so when I finally got to blow his head off it was a pretty rewarding moment.
Gameplay-wise this sounds pretty horrible (to me at least), but in the context of the game it seems to work reasonably well. The scenes are probably prettier as quick-time events than they would have been with regular gameplay, and they really pulled me in with the urgency of the situation.
**SPOILERS** (story spoilers ahead)
I haven't finished the game yet, but overall I like the storyline and gameplay so far. I guess I would dub it an adventure/third-person shooter with RPG and puzzle elements. The RPG element mainly consists of upgrading your character and weapons, and I feel it fits really well with the character development. Because it's a sort of origin story, Lara starts out as a reasonably helpless young woman who's horrified at the prospect of killing someone. But she grows and toughens up through the game, and when her enemies, the hostile island inhabitants, have killed off enough of her initial companions and she starts to call out that she's going to kill them all, I really shared in the emotion and was pretty pleased with my upgraded bow and my shotgun just blowing baddies away.
The puzzle element is rather basic in my opinion, but I like puzzle games. It does incorporate some nice uses of elements like fire and wind, and for those who don't like puzzles relevant items in the environment can be highlighted as hints. I think that's sort of missing the point of the puzzle but I guess it's nice if you're really stuck on one.
So overall, I think Tomb Raider is a pretty cool game with an appealing storyline. I guess they took some risks with the quick-time events gameplay-wise, but I think overall it pays off and adds to the story, through congruency with the story/context, and because some of them are genuinely hard, instead of the clip just sort of pausing to give you time to respond. Also, as you can read pretty much anywhere online, it depicts Lara Croft as a real female character with character growth instead of just the big tits and short shorts I remember from the old PC TR games. Go play it.