Interaction Design Association

Asking questions to participants in a positive or negative way?

chiwah liu
May 20, 2008 1:30am


Hello,

If I want to do a survey of participants about, for example, the usability of a product, is it better to formulate questions in a positive way ("Is this product easy to use?") or in a negative way ("Did you experience difficulties using this product?")?
Or maybe I should ask both, because a user could experience difficulties with a product but still find it easy to use.

What do you think about that ?
chiwah

 
Steve Baty
May 20, 2008 2:21am

Chiwah,

It's preferable to ask the question in a neutral way: "Consider your experience with this product. Did you find it:

  • very difficult
  • difficult
  • average
  • easy
  • very easy"
Alternatively, you could ask them to rate the usability from 0-5 or 0-10, etc.

    Otherwise, you do indeed risk biasing the responses. A really good book to read on the topic is 'Improving Survey Questions: Design & Evaluation' http://www.amazon.com/Improving-Survey-Questions-Evaluation-Research/dp/0803945833
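A minimal sketch, for illustration only, of how a balanced five-point item like this might be coded for analysis. The labels follow Steve's scale, but the symmetric numeric mapping and the example responses are assumptions:

    # Illustrative only: one way to code a balanced five-point ease-of-use item.
    ease_scale = {
        "very difficult": -2,
        "difficult": -1,
        "average": 0,
        "easy": 1,
        "very easy": 2,
    }

    responses = ["easy", "very easy", "average", "difficult", "easy"]  # hypothetical data

    scores = [ease_scale[r] for r in responses]
    mean_score = sum(scores) / len(scores)
    print(f"mean ease score: {mean_score:+.2f} (negative = difficult, positive = easy)")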

    Regards

    Steve


    -- Steve 'Doc' Baty B.Sc (Maths), M.EC, MBA
    Principal Consultant
    Meld Consulting
    M: +61 417 061 292
    E: stevebaty at meld.com.au

    UX Statistics: http://uxstats.blogspot.com

    Member, UPA - www.upassoc.org
    Member, IA Institute - www.iainstitute.org
    Member, IxDA - www.ixda.org
    Contributor - UXMatters - www.uxmatters.com

     
    Alexander Baxevanis
    May 20, 2008 2:39am

More importantly, the question that should always be asked, if you want insights that will help you improve your product, is WHY they think your product is (un)usable.


     
    chiwah liu
    May 20, 2008 2:40am

Steve Baty wrote: "It's preferable to ask the question in a neutral way."

Thanks a lot. That's exactly what I was looking for.

    Regards,
    Chiwah

     
    chiwah liu
    May 20, 2008 5:12am

I am thinking about a bipolar scale: for example, asking users to rate the product on a bipolar scale between "attractive" and "unattractive".

What do you think?

I am also wondering whether bipolar scales would diminish the probability of positivity bias. For example, we could ask whether a product is professional, and ask participants to rate it from strongly agree to strongly disagree; participants would have a tendency to agree.

But if I ask participants to choose between "professional" and "amateur", they cannot simply agree or disagree, they have to choose. And they have a better understanding of what is being measured.

Is my hypothesis right?

    Regards,

    Chiwah

     
    Christine Neidley
    May 20, 2008 7:51am

Just a quick note: Your hypothesis sounds great. With Likert scales (even if they're using words instead of numbers for the participant's response), try to use an even number of options. Four is nice. With four options, your participant must decide between the two poles, but still has room to express the degree to which they agree.

    So instead of attractive/unattractive, you could have: attractive, somewhat attractive, somewhat unattractive, unattractive (This is just as you were saying in your hypothesis.)
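A minimal sketch of how Christine's four-option forced-choice item might be tallied; the labels come from her example, and the responses are hypothetical:

    # Four options, no midpoint: every answer falls on one side of the divide.
    options = ["unattractive", "somewhat unattractive", "somewhat attractive", "attractive"]

    responses = ["somewhat attractive", "attractive", "somewhat unattractive"]  # hypothetical

    negative = sum(1 for r in responses if options.index(r) < len(options) // 2)
    positive = len(responses) - negative
    print(f"leaning negative: {negative}, leaning positive: {positive}")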

    I go to a lot of websites that don't necessarily sparkle, but they aren't blaze orange with a looping midi of a Christmas carol. So I know that I'm always grateful for a little bit of room in the middle.

One downside of being a Tech Comm graduate student: I have in fact had nightmares about survey reports. I got to breathe, eat, and sleep this stuff for a semester last year.

    Hope this helps,
    Christine

     
    mark schraad
    May 20, 2008 7:58am

What you are proposing is called a semantic differential. Think very carefully about the terms you use; it is not as simple as you might think. Getting those terms right is the single hardest part of this technique and has the potential to radically skew your results. There is quite a bit of information out there, now that you know what to call it.

The problem with most Likert-scale surveys (1 = don't like it, 10 = like it) is that the survey will be pre-biased towards an aggregate score of seven. That is really hard to get around, both for the people administering the survey and the people taking it.
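For illustration, a rough sketch of what a small semantic-differential item set might look like in code; the adjective pairs, the 1-7 positions, and the single respondent's ratings are all hypothetical:

    # Each item is a pair of opposite adjectives; the answer is a 1-7 position between the poles.
    pairs = [("amateur", "professional"), ("unattractive", "attractive"), ("confusing", "clear")]

    ratings = {("amateur", "professional"): 6,     # one hypothetical respondent
               ("unattractive", "attractive"): 4,
               ("confusing", "clear"): 5}

    # Print a simple profile: the 'x' marks where the answer sits between the two poles.
    for left, right in pairs:
        score = ratings[(left, right)]
        print(f"{left:>12} {'-' * (score - 1)}x{'-' * (7 - score)} {right}")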

    Mark


     
    Jeff Gimzek
    May 20, 2008 10:56pm

We actually used a 5-point scale in our rating system (Very Dissatisfied -- Very Satisfied) specifically to give users a "neutral" option, and not force them to show a bias where none exists.

Some people really just don't care, or have factors evenly weighted enough that they cancel out.


    --

    Jeff Gimzek | Senior User Experience Designer

    jeff at glassdoor.com | www.glassdoor.com

     
    chiwah liu
    May 21, 2008 12:30am


I don't know if I am right, but for me, the "neutral" option depends on the number of users:

  • If we don't have enough users to reach statistical significance (let's say fewer than 100 users) for our survey, we should add a "neutral" option, because the users who don't have any idea can bias the survey.
  • If we do have enough users to reach statistical significance (200-300+ users), we can force them to choose, because those with no opinion should give a random answer. That means that if my scale runs from 1 to 4, I should see the same number of users answering 2 as answering 3. If that happens, I can suppose that users don't really have an opinion about the question. Otherwise, they probably have a preference, and the result shouldn't be biased because it is statistically significant. (A quick way to check this even-split idea is sketched below.)

What do you think?
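A minimal sketch of one way to check the even-split idea described above, assuming responses are collected on a 1-4 scale with no neutral option and that scipy is available; the counts are made up:

    # If respondents with no real opinion answer at random, the two middle
    # categories (2 and 3) should be chosen about equally often.
    from scipy.stats import chisquare

    count_2, count_3 = 138, 162  # hypothetical numbers of respondents answering 2 and 3

    stat, p_value = chisquare([count_2, count_3])  # tests against an even 50/50 split
    if p_value < 0.05:
        print(f"p = {p_value:.3f}: the split is uneven, so respondents seem to have a real preference")
    else:
        print(f"p = {p_value:.3f}: consistent with an even split (no clear preference)")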

     
    Caroline Jarrett
    May 21, 2008 7:27am

    From: "chiwah liu" <chiwah.liu at gmail.com : : I don't know if I am right, but for me, the "neutral" option depends on the : number of users :
    : - If we don't have enough user to reach a statistical significance (let's : say less than 100 users) for our survey, we should add a "neutral" option. : The users who don't have any idea can bias the survey. :
    : - Now if we have enough user to reach a statistical significance (200-300+ : users), we can force them to choose because they should give a random : answer. That mean if my scale is between 1 and 4, I should have the same : number of users that answer 2 than those who answer 3. If this case happens, : then I can suppose that users don't really have idea about the answer. : Otherwise, they might have preferences and it shouldn't be biased because it : is be statistically significant.
    :
    :
    No. I think the phrase 'force them to choose' shows exactly why this is a bad idea.

    You ought to allow users to have the opinions that they have - even if those opinions include 'don't know' or 'don't care' (or both).

    The answer options you offer should depend solely on the answers that your users want to give - not upon how many users there are.

    If you don't know what answers your users want to give, then interview them to find out before running your survey. And by the way - you should do that anyway (i.e., interview some users first) if you want anything like good results from your survey.

    There's a longer version of my views at:
    http://www.usabilitynews.com/news/article1269.asp

    Best Caroline Jarrett
    caroline.jarrett at effortmark.co.uk

     
    Chauncey Wilson
    May 21, 2008 5:24pm

Caroline makes some very good points. Questionnaire design is complex, and there are hundreds of articles debating the use of mid-points, the meaning of a mid-point, and other topics such as how the order of questions influences answers. For many surveys, a "Don't Know", "Don't Care", or "I Don't Want to Answer" option (say, for salary or personal-information questions) should all be considered. If you are writing a questionnaire for a survey on a topic that you don't know well, doing some research beforehand to create the response categories is quite important, so you don't end up with a lot of answers in your "Other" response category.

There are several excellent books that delve into the issues of bias and the many design issues that you need to consider. I would recommend:

Robson, C. (2002). Real-world research (Second edition). Malden, MA: Blackwell Publishing. This book describes many methods for gathering data, including an excellent section on scale and questionnaire design. It has a short but excellent description of, for example, how to develop Likert items.

    Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass. Thinking About Answers explores cognitive issues associated with survey methods. These issues include: context effects in surveys, order effects, event dating, counting and estimation, and autobiographical memory. The final chapter summarizes implications of cognitive research for survey design, administration, and interpretation.

Dillman, D. A. (2007). Mail and internet surveys: The tailored design method 2007 update with new internet, visual, and mixed-mode guide. New York, NY: Wiley. This book is the third by Dillman, who has written the most general book of survey guidelines.

Aiken, L. R. (2002). Attitudes and related psychosocial constructs: Theories, assessment, and research. Thousand Oaks, CA: Sage Publications. There are many books in social psychology that get into scale development. It is worth getting a book like Aiken's (or another) to understand the issues with Likert scaling, semantic differential scales, odd versus even scales, and whether to label each scale point or only the end points.

    Chauncey


     
    Katie Albers
    May 21, 2008 7:49pm

    I just want to emphasize strongly that you have to be very careful in constructing questions so that you're asking what you think you're asking. What does that mean? Well, my newest favorite question is "When you finished your transaction did you believe that the sales person successfully imparted his knowledge to you?" [no, really, they asked that. It was so bizarre I actually wrote it down.] My first (and continuing) reaction was that I had been more knowledgeable than the salesperson was when we started, and I now felt like he had succeeded in deleting knowledge from my brain (though I still knew more than he did) and I wasn't sure whether that was a (7) Completely successful or a (1) Completely unsuccessful.

    I make it a point when I have to construct surveys to submit the questions to a couple of the crankiest people I know in terms of language and willfully attributing meaning literally when you were thinking more figuratively and vice versa. Any question that does not survive that process I rewrite until it passes. Yes, I user test my user testing. sigh.

    kt --

    Katie Albers
    User Experience Consulting & Project Management
    katie at firstthought.com

     
    Chauncey Wilson
    May 22, 2008 3:10am

I would consider Dillman to be the best overall set of guidelines for survey and questionnaire design and implementation. Dillman covers the process of writing cover letters, recruiting respondents, and other issues.

    Chauncey

chiwah liu had asked: "Thank you for the books you recommended. Is there one book that is particularly valuable? (I am not sure I could buy all of these books.) I already have some knowledge of surveys and psychometrics, so I would prefer a book that goes into detail."

     
    chiwah liu
    May 23, 2008 4:53am

    Thank you, I think I am going to buy this book.

    Chiwah

     
    chiwah liu
    May 23, 2008 9:21am

Caroline said: "You ought to allow users to have the opinions that they have - even if those opinions include 'don't know' or 'don't care' (or both)."
Do you mean that when a user chooses "neutral" for a question, it has a meaning? And that if most of my users choose "neutral", it means my question is badly formulated? In both cases, should I interview them to find out why they chose the "neutral" option?

But in that case, does it mean that I should include, for each question, a checkbox for "don't care", for "don't know", and for whether they sometimes felt one way and sometimes the other?

    Best, Chiwah

     
    chiwah liu
    May 26, 2008 2:22am

Caroline said:

    The only way to find out is to interview some users to get a feeling for the types and ranges of opinions that they do have. Then you construct your questions. Then you test your questionnaire, and interview the test participants about it. By this point you have a good chance of getting a decent questionnaire put together and that's half the battle of a survey.
Thank you for your answer. Our marketing department, with whom I am trying to work, doesn't do any one-on-one user research before creating a questionnaire. They just ask the client what they want measured, reformulate it, and the questionnaire is done!

For the test, they just give it to us and we have to validate it… which is not really a test.

Doing one-on-one user research first could be very time-consuming. What argument could I use to convince both the client and the marketing team that it is worth the effort?

    Best, Chiwah

     
    Caroline Jarrett
    May 26, 2008 3:53am


    You are where you are. I'd consider incorporating the marketing questionnaire as part of the test. I'd ask the participants to fill in the questionnaire for me but get them to explain to me, question by question, what the question meant to them, why they were picking the answers, if they felt the question was appropriate and whether you should have asked any different questions. Video it all.

    Maybe your marketing department is correct, in which case you'll get plenty of good material to flatter them with in the future and that will all help the working relationship. Maybe they aren't as correct as they hope, in which case you can go back to them and say: "Your questionnaire was great but we did have these minor difficulties with it here, here and here. Maybe next time we could do a couple of interviews with users first of all?"

    Warning: even if it's true, avoid going back to the marketing department with a message like: "I told you your questionnaire approach was all wrong and here's the evidence to prove it". That's a recipe for defensiveness, rejection, and all sorts of other bad stuff.

    As for time-consuming: it never ceases to amaze me that I meet such resistance to doing even a couple of informal interviews with users (say, half a day max) whereas organisations think nothing of sending out 1000 questionnaires just like that. Or even sending questionnaires to all their users!!! Strange, isn't it?

    Caroline Jarrett
    caroline.jarrett at effortmark.co.uk
    07990 570647

    Effortmark Ltd
    Usability - Forms - Content

    We have moved. New address:
    16 Heath Road
    Leighton Buzzard
    LU7 3AB

     
    Chauncey Wilson
    May 26, 2008 4:19am

Caroline's suggestion about doing a think-aloud study of the questionnaire is excellent. I routinely do this as part of the questionnaire design process. I ask participants to read it aloud and give me feedback about meaning, bias, instructions, wording, terminology, and anything else that comes to mind. For items with un-ordered response categories (for example, job title), you might find that you are missing a key item. These think-aloud sessions can be short, say 15-30 minutes. Though it is a bit harder, you can do this over the phone or through remote collaboration software like GoToMeeting or LiveMeeting or other systems where you can display a question and hear the person.

If you are using an electronic survey, you might get feedback about navigation, going back, required fields, etc. For example, I recently reviewed a survey where a single question listed about 10 fields for address, phone, etc. It turned out that all fields were required, even the one that was Address 2, which many people would not need to fill out. When the participant ignored that field and got a warning, it was confusing, so he eventually typed some junk into Address 2 and could proceed. The software allowed you to make individual fields within a single question required or not, which is good, but that feature is buried.

Your test of the questionnaire might reveal missing categories or the wrong time reference or frequency responses. If you were asking about a CRM or financial system and your last response option was "I use this a few times a day", your interview might reveal something like "Wow, I use this feature 50-100 times a day". If you find out that a number of respondents make REALLY heavy use of a system, that is critical design input and may affect features for expert, high-frequency users. A few times a day is much different from 50-100 times a day.

When you test the questionnaire (with people who are as close as possible to those you will be sampling), watch for pauses and facial expressions, and ask what people were thinking or what caused "that smile" or "frown".

Having people read through a survey line by line and give you feedback is a variation on usability testing called the user edit (or usability edit), which is not well known but is a powerful way to get feedback on procedural documentation (and questionnaires). Here are some references to the user edit method:

    Atlas, M. (1981). The user edit: Making manuals easier to use. IEEE Transactions on Professional Communication, 24:1 (March): 28-29.

    Atlas, M. (1998). The user edit revisited, or "if we're so smart, why ain't we rich?". Journal of Computer Documentation. 22:3 (August). ACM Press: New York, NY. 21-24.

    Schriver, K. A. (1991). Plain Language for Expert or lay audiences: Designing text using User Edit. (Technical Report Number 46) Pittsburgh, PA: Carnegie Mellon University, Communications Design Center.

    Soderston, C. (1985). The user edit: A new level. Technical Communication, 1st Quarter, 16-18.

    Chauncey

     
    chiwah liu
    May 26, 2008 12:52pm

Caroline said: "it never ceases to amaze me that I meet such resistance to doing even a couple of informal interviews with users… whereas organisations think nothing of sending out 1000 questionnaires just like that."

Hmmm. I work in a web agency and we are always very short on both time and money. We almost always choose a quick-and-dirty method for our user research because our clients never have enough money. They also need to get the website very fast, because the advertising campaign is going to happen within a few months…

For example, if our client wants to test our prototype now, we might have only about one month (or less) to do a whole round of user testing: survey the users, find them, call them, create scenarios, run tests with about 20 users and, of course, deliver the results and recommendations, all in less than one month… because the advertising campaign is coming soon and we can't be late on schedule.

So now I am so used to very quick-and-dirty user research that it seems normal to choose the fastest and dirtiest method rather than the opposite.

     
    Jessica Enders
    May 27, 2008 12:36am

    Chiwah Liu says:

    "If we don't have enough user to reach a statistical significance (let's say less than 100 users) for our survey, we should add a 'neutral' option. The users who don't have any idea can bias the survey."

    I'm not sure what you mean by a 'neutral' option but it sounds like what you are referring to is a middle option on a scale.

I think it is important to distinguish between a middle option (e.g. neither agree nor disagree) and a don't know/no opinion option. A middle option allows users to say "I feel a little bit positive and a little bit negative at the same time", whereas a don't know/no opinion option allows them to say they genuinely don't have a view.

Research on the effect of a middle option on the distribution of answers is mixed. In some, but not all, research cases, the middle option did not have an effect on the overall relative proportion of positive and negative responses.

As to users with no idea biasing the survey, I would ask what evidence you have of this. Presumably, if you don't provide a don't know/no opinion option, some users will resign themselves to a positive response and some will resign themselves to a negative response. I'm not aware of any research suggesting that the absence of an opinion leads to bias in survey results.
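A small sketch of this distinction in practice: the middle point stays in the numeric summary, while "don't know" is counted separately. The labels and responses are hypothetical:

    # The middle option is a substantive answer; "don't know" is not, so it is
    # excluded from the numeric summary and reported on its own.
    scale = {"strongly disagree": 1, "disagree": 2,
             "neither agree nor disagree": 3,
             "agree": 4, "strongly agree": 5}
    DONT_KNOW = "don't know / no opinion"

    responses = ["agree", "neither agree nor disagree", DONT_KNOW, "disagree", "agree"]

    substantive = [scale[r] for r in responses if r != DONT_KNOW]
    dont_know_count = responses.count(DONT_KNOW)

    print(f"mean of substantive answers: {sum(substantive) / len(substantive):.2f}")
    print(f"'don't know' answers reported separately: {dont_know_count}")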


     
    chiwah liu
    May 27, 2008 10:19am

Indeed, you are all right. I will try to convince the marketing department to do think-aloud sessions on the questions (with about 5-7 users, I think) to validate them.

Do you all also do one-on-one interviews with users before creating a survey? And if you do, what kind of technique (or methodology) do you use to interview them? Do you have some kind of guideline or something like that?

    Chiwah

     
    Jessica Enders
    May 29, 2008 6:53pm

Think-aloud evaluation of survey questions is commonly referred to as cognitive interviewing. The technique comes from social psychology (the field of Dillman, Sudman et al., whom Chauncey mentions) and is one method of pre-testing a questionnaire. Other methods include behaviour coding and dress rehearsals.

    I was trained in the method of cognitive interviewing over the period of a week, full time, so it's not something you can learn well in a hurry. However, I plan to write an article on testing methods for forms and questionnaires for my website in the near future, so keep an eye out.

    In the interim, you could:

  • do a search for "cognitive interviewing" on the web
  • adapt think aloud methods that are used for testing websites
  • read one or more of the papers listed below
  • hire someone like myself (in Australia) or Caroline Jarrett (in the UK) to do it for you!
  • "New strategies for pre-testing survey questions" (Oksenberg, Cannell and Kalton) in the Journal of Official Statistics.

    "Cognitive Laboratory Methods: A Taxonomy" (Forsyth & Lessler) in Measurement Errors in Surveys.


     
    Musstanser Tinauli
    May 30, 2008 8:39am

    My Research: Interaction Design and Experiential Factors

    The theme of this research is: IxD and Experiential Factors.

The experiential factors considered include learnability, usage, error and feedback, comfort, collaboration, affect, guidance and support, and accessibility. Depending on the kind of product, some factors may not be applicable.

The boundaries of IxD here are pretty much similar to those defined in Dan Saffer's IxD relationship. I intend to devise and experiment with a strategy so that we are able to evaluate interaction design.

In fact, IxDA has been a great motivation to defend this theme and these ideas.

We look into different categories of case studies with a focus on IxD and associated factors (which I have termed experiential factors). The evolving model (the IxD and Experiential model) has so far been applied to the following: games, digital pen and paper, and eLearning platforms.

The idea is to observe these factors to better understand the model of IxD. This can ultimately help to give a measure of IxD as a whole. It can also help us to understand the relationship between different activities in a product and the product itself.

Interaction design? What interaction design? I often tell people that I am researching interaction design and that I want to be an interaction designer. The reply is: "Ah, what exactly?" What do you think about that?

Looking forward to suggestions, critique and encouragement. The images can be found at:
http://indaco.tinaulis.com/joomla/index.php?option=com_content&task=view&id=24&Itemid=35

PS: I would be pleased to provide more information and results.
Anxiously waiting for your replies.

    Musstanser
    PhD candidate INDACO, Politecnico Di Milano
    Interaction Designer cum IT Consultant @ Centro Metid, Italy.


     