
Archive for July, 2011

EDTech 505 Entry #5

What I have learned about evaluation over the past week has to do with “media comparison studies.”  It seems that they are not that helpful and can only be applied in very strict terms.  This also helped me understand what I need to do with my final project.  I cannot focus on comparing what students learn using iPads to what students would learn just using a book.  I need to focus on measuring changes in 21st century skills and the overall opinions of the students in the class.  Data comparing the latest new technology to the old technology has rarely been shown to produce any improvement in results.  I can see how it would be tempting to want to compare one medium to another.

I also read up on norm-referenced and criterion-referenced tests.  It is interesting that they are completely different in what they test, yet I feel that people in the educational field really don’t understand the difference.  The state of Iowa uses a norm-referenced test to report out for No Child Left Behind.  NCLB is supposed to show proficiency for students, yet the state uses a norm-referenced test where 40% of the children will always score below proficiency; because the test ranks students against a norming group, a fixed share falls below the cutoff no matter how much everyone learns.  It makes no sense.

I’m looking forward to finishing my last class in the EDTech program.  Now I just need to finish my final project.  It is going to be an epic project for me, because I really do think I’m going to struggle with my project being different from the others.


EDTech 505 Entry #4

After working with my partners on the second stage of the project, I had a much better idea of what a proposal should look like.  It is much easier to understand the process by working through it with the other members on the project.

I’m looking forward to reading the proposals the other students selected.  My selection was called an evaluation, but in hindsight it doesn’t really meet the requirements of an evaluation.  It is much like the research vs. evaluation issue: people label a study an evaluation, but it is more of a summary report of what people thought.  I assume this happened because the people in charge only thought about evaluating after the year had gone by.  Now they need to justify the money they spent on a new project, so let’s write an evaluation.

[Image: cycling jerseys I designed, on less-than-famous models]

I wonder if there is a divide among evaluators over qualitative vs. quantitative methods.  I thought about this while reading the qualitative methods pieces by Weiss.  There is a perceived (at least by me) divide among educational researchers about their preferred method of data collection, and I wondered if this translates over to evaluators.  I would think that evaluators would use mixed methods most of the time, but that is probably only a reflection of what I would do.

On a side note:

I have come to compare cycling training to evaluation.  I collect all this great data on heart rate, calories burned, total ascent, miles, etc., which tells some of the story.  My legs the next day can tell a completely different story because of outside factors such as sleep, food consumed, and so on.  It got me thinking about what outside factors could affect an evaluation of a program that an evaluator would never think to ask about.  What if the person running the program is covering up problems at home, or a person in the program has a drinking problem?  Is this common?


EDTech 505 Entry #3

I have learned the parts of an evaluation proposal.  I’m not sure that I correctly handled the evaluation methods section in the proposal; I struggled to put the goals and methods into words.  I understand that the evaluation request had to be generic, but for me that made it hard to grasp real situations that I could design an evaluation around.  I also found myself wanting to solve a problem I saw with the DIP program: why not make it digital and save the printing cost?  I then remembered the lecture by Professor Perkins on problem analysis and realized I was trying to solve a problem that I was not asked to solve and that may not need to be solved.  I can see where it would be hard for an evaluator not to just jump in and say, “This is obvious; you should just do this.”

I read through Posavac & Carey to try to get a better understanding of how to write a proposal, and found it helpful in understanding evaluation.  It covers the same basic information as Boulmetis & Dutwin, but from a slightly different point of view.  Posavac & Carey will be helpful to me as I start researching for my final project.

I’m looking forward to writing my rubrics for measuring the success of my iPad program, and at the same time I’m worried about time management for such a large project during a busy time of year.  It will be nice to have something from this class that I can use right away at school this fall.
