Hello everyone, it’s time to look back and check reports on all the work done in 2014. The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog. Have a happy New Year 2015, my friends!
Here’s an excerpt:
A San Francisco cable car holds 60 people. This blog was viewed about 2,300 times in 2014. If it were a cable car, it would take about 38 trips to carry that many people.
In the last two posts, I commented on educational assessment from two perspectives: first, analysing the types of tests we can find, and second, reflecting on how the information obtained through these tests is used in the media. In this third post on educational assessment I’ll be dealing with the so-called ‘released items’, that is, the activities and questions included in these official tests that anybody can access and explore. To narrow down the analysis, I’ve chosen the European Survey of Language Competences (ESLC). As I said in a previous post, this test assesses students’ competence in two foreign or additional languages. In Spain, English and French are tested, with students finishing compulsory secondary education as participants. I’m wondering: What do these questions look like? What type of structure do they follow? What thinking skills are they promoting? Are they fostering creative thinking?
To start with, I accessed the document ‘released tasks’, available on the Spanish Ministry of Education website (you can access it here). This gave me the chance to get a first, gut-level impression of the structure and content of this test. It quickly reminded me of those old tests I had to take when I was studying English at the Language School. I was 10 years old, and I hated them :). They also share the same structure as other tests, such as the ones used in the Official School of Languages or by examination institutions such as Cambridge or Trinity. In other words, the format is quite traditional. Regarding content, if you have a look at page 5 of the document, you’ll see that the topics covered are also quite common: family, hobbies, holidays, etc.
Most tasks follow a multiple-choice response format and ask the student to spot key information in written and oral texts. Let’s look at an example that intrigued me, on page 16. There you can read a short text about Leo, a cat that has been lost. The text is meant to be a notice written by his owner, but it doesn’t contain the common features of this type of text, namely a photo of the lost animal and short, descriptive sentences. The text is not natural at all, and it is really clear that some sentences have been added just to have some more ‘stuff’ to ask about.
Trying to see the bright side of things, knowing the structure of the activity may be of use if students are very scared thinking about the test. In this way, we can make them see that the tasks are very similar to the ones most textbooks and official tests use. It is also interesting for teachers to see how the activities are scaffolded differently according to students’ level (even if they are not the ideal model for scaffolding, they may serve teacher trainees to reflect on how these tasks could be improved to be used in class in a more meaningful way).
And here goes my question: what’s the place of creativity in all of this? How are we measuring divergent thinking? How are we checking that students are using compensation strategies which do not answer the question exactly as required but still make them successful in the end? Are these tests addressed to students who feel at ease with Multiple Intelligences other than the Logical and Linguistic ones? Is this the type of learner we are looking for? Are these tests reflecting a spoon-feeding methodology which was useful in the Industrial Revolution, when everything was structured, organised and measured accordingly? Is there a chance to design tests which reflect students’ full learning potential instead of making them say/do/match/point to what is considered ‘correct’? Discussion is open.
Image 1 taken from here.
Image 2 by 2nix taken from freedigitalphotos.net
In this post I will deal with the PISA 2012 report as analysed by different media. It is my purpose to show how one single report can be interpreted from different perspectives depending on the source of information we are using. Let’s start!
PISA 2012 and the UK. I googled these words and came across a BBC article describing PISA results. The main worry here is that the UK is falling behind other countries and no longer appears at the top of the rankings. The Chinese educational system seems to produce more fruitful results and is seen as a powerful rival. As the information focuses on top and bottom countries, Spain does not even appear…
What about the top PISA country, Finland? Its interpretation of the test is very similar to the UK’s. They also consider that their scores have declined dramatically and are concerned about the reasons why this has happened. In fact, the article I read mentions that it is important to get the whole educational community involved in improving school quality, and highlights the importance of motivating students and making schools learning-friendly. No doubt their vision of education as something that goes beyond the classroom walls, and their awareness of children’s needs and emotions, keep them at the very top, no matter what PISA says this time.
And what about Spain? The digital edition of the newspaper ABC analysed the report, emphasising that Spain scores below average in Maths, reading comprehension and Science. However, later in the article they mention that, in comparison with PISA 2003, students are scoring similarly in Maths and have improved in the other two areas.
Along the same lines, El País presents us with a ranking where Spain sits in the bottom part of the list. Concerning the reasons why we are failing (they state that improving is almost impossible), they report opinions given by stakeholders who point to teachers and schools, although they do not indicate how to improve these scores.
Finally, the newspaper El Mundo argues that Spain’s low scores can be explained by the lack of investment in education. PISA results are considered a failure and a shame for Spain (literal translation of their words). They are also concerned about the educational level young people reach, as many of them only hold basic qualifications.
What’s my view after this analysis? Quality in education is a culture, and I consider that Spain has much work to do to make people aware of their part in this. It is not a question of blaming teachers or official budgets; it is everybody’s responsibility. We won’t change this in a day, but I bet that if we start the movement working hand in hand with educational centres, teachers and teacher education undergraduates, quality will become an everyday must-have.
What do you think about this? How is PISA viewed in your country? You can also follow this discussion here: @preguntasPISA
Today I’ll be covering the topic of international educational assessment. The reason is that I’m completing a MOOC on this issue, and I thought this information might be of interest to you, apart from being part of the course assignment :).
PISA is probably the best-known educational assessment study. Its name does not honour the famous Italian city; it is an acronym standing for Programme for International Student Assessment. Spain participates every three years in this comparative study, which focuses on measuring 15-year-old students’ competences in reading comprehension, Mathematics and Science. The next study will be carried out in 2015.
TIMSS stands for Trends in International Mathematics and Science Study. It measures learning progression in Mathematics and Science. It is not only interested in learning outcomes, as it also gathers information about curriculum content and implementation, methodologies used, resources and more.
PIRLS stands for Progress in International Reading Literacy Study. It is intended to be used with children aged 9, as this is a crucial stage in their reading development: decoding is by then fully automatised and students can enjoy reading. One interesting detail about the study is that it comprises both literary and informative texts.
From the myriad of tests left, I have chosen two I didn’t know, both focused on measuring students’ linguistic competence in a foreign language. The first one is the European Survey of Language Competences (ESLC), which offers very useful information regarding the percentage of students reaching basic and independent user levels, according to the European Framework, in the L2 and L3.
Finally, the ETLS is a computer-based survey on the teaching and learning of English as an additional language. It is not just focused on language competences but also on language use and attitudes, among other variables. It will be launched in 2017.
Once I’ve described the most relevant tests in my field, it is time to reflect on the use of these tools. Are they used for formative assessment? Do political stakeholders use the information produced in the reports correctly? Does the inclusion of these tests lead to a washback effect? If we are supporting a change in school practices, gearing towards more active and participative methodologies, shouldn’t we design assessment tools accordingly? Over to you…
My tweet for this module: Did you know the ESLC study provides info on L2 and L3 competences in Europe? #preguntasPISA
I’m the type of teacher who loves using envelopes, slips of paper and boxes. I’m always carrying materials like these to my classes, as I believe they can be used for many purposes. Last year I attended a session about ‘learning boxes’ on a training day for teachers, and since then I had been thinking about how to integrate this idea into my lessons. At that time, my colleague prof. Juanjo Rabanal popped into my office to ask for my opinion about the types of activities which could be carried out in the ‘Jornadas de Educación’ (Education Workshops), held once a year at the university where I work, and a light went on in my mind: why not turn ‘learning boxes’ into ‘interactive boxes’?
Interactive boxes are plastic or paper containers in which we put a task or activity. Students need to be able to carry it out without any help from lecturers, and the task should ideally involve cooperative learning. Boxes need to include clear instructions about what students must do to produce a learning outcome. Another characteristic is that time is restricted (that makes students get to the point and collaborate!). Instructions can be included as a paper worksheet, a digital file on a pen drive, a link to a video you have previously recorded, etc. Once students read the instructions and negotiate their meaning in their group, they start working to complete the task. In the box you may include objects they need to use in the activity. In this case, as we were working on interactivity, we used ICTs together with other objects, for example a mobile phone, a tablet, a laptop, a measuring tool, etc. To make things more challenging, prof. Eva Peñafiel contributed to the project with an added difficulty: geocaching. In that way, students were given the coordinates of the location where the boxes were hidden (this was the first step to start working). We were really happy with the results of this experience, and students seemed to find it quite innovative.
My second try with interactive boxes was in a Postgraduate on-campus session. Lecturers from different departments created cross-curricular activities which revolved around the topic ‘Scientific Expedition Trips‘. Students were divided into groups and given an explorer’s pack (an introduction to the activity, a map with the places where the boxes were located, and a tablet). Once they had read the instructions for the activity, they had a limited time to complete the task. For example, I designed an interactive box with fragments of real texts taken from diaries written by explorers. Students had to create a comic strip using photographs and a comic-creation app; they created freeze frames and took pictures. When we finished the activity, teachers and students had a round-table discussion together, guided by the analysis of the activities carried out and a discussion of their design, usefulness, etc. One of the best things was to discover that many students were thinking about adapting this activity to their own teaching contexts.
From a linguistic point of view, cooperative learning sets the best context for making participants use expressions related to negotiating meaning, giving opinions, praising others’ ideas or encouraging peers to participate, among others. It is also advisable to encourage participants to reflect on what they have thought and felt while involved in the experience, and to come up with more ideas for new boxes they may use in their lessons.
Are you thinking about using this resource? Have you already used it? Was it successful? Would you like to have more ideas to develop it? I’d love to hear from you.
On 29th May, our University will be hosting the 1st Meeting of Educational Teams. The central idea is that we are living in a society which moves very fast and requires educational centres to change and adapt constantly. Therefore, the leadership teams of educational centres need to be flexible and creative to cope with these circumstances, looking at obstacles as if they were opportunities. In this context, this meeting aims to provide participants with the opportunity to think and reflect on the ways we can take advantage of educational change to keep on advancing in Education.
The Meeting is open to leadership teams and teachers working at any educational level, and it will be held with Spanish as the vehicular language. It will run from 9:00 to 14:30.
Registration is free, but you need to book a place by sending an email to email@example.com with the subject: I Encuentro de Equipos Educativos de Organizaciones que Aprenden. In this email you need to indicate your full name and identity card number.
If you need more information, you can access it here.
1st Meeting of Educational Teams at Cardenal Cisneros University College. Photo: Communication Service CUCC
When I first heard of rubrics, I was thrilled to bits. Assessing students’ work is not my cup of tea, and I always found it difficult to provide students with enough feedback to justify my mark and to guide them to improve their work. Working with rubrics looked like the perfect solution to this situation, and I started to use them with enthusiasm. I was a Practicum tutor, and I thought it was a good idea to use rubrics there. I defined 30 items as assessment criteria and checked Practicum reports against them. It was hard work at the beginning, but once I knew what I was looking for in my students’ work, marking became sort of automatic. Once this assessment process was finished, I found out that most of my students didn’t come to my office to collect their work; hence, they didn’t have access to the wonderful rubrics I had written for them. Naturally, I thought that my assessment was better because I could justify the mark with detailed arguments, but I did not feel I was providing the ‘formative feedback‘ I wanted to provide.
Once the new degrees (Bologna Plan) were implemented, it felt natural to incorporate rubrics into my everyday practice. Every time I assigned a task, I included clear information about what I expected students to do (or be). I started using checklists, then moved on to assessment criteria classified into 5 levels according to the degree of excellence, and finally I used level descriptors, explaining which outcomes belong to each mark for each assessment criterion. It took me a long time to find useful rubrics, and most of the time I had to adapt them or create new ones from scratch. Even showing students the rubrics in advance was not as effective as I had thought, and many of them confessed they did not look at them when completing assignments.
My last stage in the use of rubrics has been somewhat ‘painful’. From the results I obtained using rubrics in different tasks, I realised that many students were obtaining passing grades even when their work was not worth that mark. In other words, I considered that if I hadn’t used rubrics, those students wouldn’t have passed. I discussed this with my colleagues. Some of them had stopped using rubrics because they considered them a waste of time (students don’t look at them, it takes longer to mark work, and marks tend to be higher than they should be; that’s what they told me); others were adapting them to make them ‘fairer’ (or tougher). I then looked at my rubrics and discovered that I was giving 1 point out of 5 for an incorrect structure, or poor English. The trick was then to do some maths and check whether work with poor learning outcomes could pass with that rubric, and Eureka!, I found out that many rubrics were making things too easy for my students. On a scale from 0 to 5, if a piece of work doesn’t reach 2.5 on a criterion, there is no point in marking it further: you’re giving points for something which blatantly doesn’t meet the minimum requirements.
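The ‘do some maths’ check above can be sketched in a few lines of Python. The criterion names, the point values and the 50% pass mark are my own invented assumptions for illustration, not taken from any real rubric:

```python
PASS_MARK = 50  # assumed pass threshold, as a percentage

def total(scores):
    """Overall mark as a percentage, with each criterion scored out of 5."""
    return 100 * sum(scores.values()) / (5 * len(scores))

def passes_lenient(scores):
    """Pass if the summed points reach the threshold, however they were earned."""
    return total(scores) >= PASS_MARK

def passes_strict(scores, minimum=2.5):
    """Additionally require every single criterion to reach 2.5 out of 5."""
    return passes_lenient(scores) and all(s >= minimum for s in scores.values())

# Weak work: 1/5 'pity points' for structure and English, while easy criteria
# (format, meeting the deadline) score high and drag the total over the line.
weak = {"structure": 1, "english": 1, "content": 2, "format": 5, "deadline": 5}
print(total(weak))           # 56.0
print(passes_lenient(weak))  # True  -> the lenient rubric awards a pass
print(passes_strict(weak))   # False -> the per-criterion minimum blocks it
```

The design choice is the per-criterion minimum: instead of making every descriptor tougher, you simply refuse to let points from soft criteria compensate for work that fails the essential ones.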
Another issue concerns the integration of content and language in rubrics. Most rubrics contain a section where the teacher can indicate whether the student has a good level of English or shows good use of terminology. This ‘saves the day’ for most of us, but it is not a CLIL rubric. If we are integrating language in an appropriate way, we should be developing students’ language functions concerning the task at hand. For example, if we are working with experiments and helping students to hypothesise, students should master hypothesising structures such as “It may…”, “It’ll probably…” or “It is possible that…”. A specific item in the assessment criteria should make reference to this type of language and the learning outcomes concerning it. It is not just a matter of assessing: including this criterion in the rubric means that we have integrated the learning of this language function into our lessons, and that students have had the opportunity to practise it inside (and/or outside) the classroom.
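To make the idea of a language-function criterion concrete, here is a minimal sketch that checks whether a written answer uses any of the hypothesising structures mentioned above. In a real rubric the judgement is the teacher’s, of course; the pattern list and example sentences are mine, and the matching is deliberately rough:

```python
import re

# Target hypothesising structures from the paragraph above, as loose patterns.
PATTERNS = [r"\bit may\b", r"\bit'?ll probably\b", r"\bit is possible that\b"]

def uses_hypothesising_language(answer):
    """True if the answer contains at least one target hypothesising structure."""
    text = answer.lower()
    return any(re.search(pattern, text) for pattern in PATTERNS)

print(uses_hypothesising_language("It is possible that the salt will dissolve faster."))  # True
print(uses_hypothesising_language("The salt dissolves faster."))                          # False
```

Even as a toy, it shows the point: the criterion names the specific language of the task, not a generic ‘good level of English’.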
To sum up, my experience with rubrics is one of ‘constant learning‘, and I am now aware of many issues I have to take into account when designing a rubric (which I was not aware of before). All in all, I believe that rubrics are a great tool to assess students’ learning in a more appropriate way and to provide students with the feedback they need to do better next time.
What’s your experience with rubrics? Are you using them?