C.V.P. pedagogics &
education. ISSN: 2241-4665
Athens 30 August 2014
Assessment in Côte d’Ivoire Public Universities: An Analysis
of Less Senior Teachers’ Practices.
By
Anon N’Guessan,
lecturer at
IREEP/SHS/ Université
Félix Houphouët Boigny
Abstract:
Assessment is a very important step in the teaching-learning process. Nevertheless, it is often implemented with very little care in higher education, especially in our universities. In Côte d'Ivoire, as in most African countries, teaching activities are largely performed by assistant lecturers and senior lecturers (Assistants and Maîtres-Assistants), owing to the shortage of professorial-rank teachers and the very large number of students (Université Félix Houphouët Boigny has 61,000 students for 1,638 teacher-researchers).
The aim of this study is to identify students' assessment practices in the public universities of Côte d'Ivoire. To achieve this goal we put forward the following research hypothesis: assessment practices in the public universities of Côte d'Ivoire depend on teachers' conceptions of assessment. To test this hypothesis we collected data from a convenience sample of teacher-researchers of Université Félix Houphouët Boigny and Université Alassane Ouattara of Bouaké.
Our research revealed a misunderstanding of the formative and summative functions of assessment among the majority of our respondents. The results also revealed an absence of pedagogic cells within the UFRs, as well as little variation in assessment methods, which are mainly oriented towards written examinations at the end of the academic year. Thus, it appears that teachers' understanding of assessment affects their assessment practices. Explicitly, we note that teachers who attach great importance to summative assessment tend to favor year-end written examinations as the means of assessing students.
Résumé
L’évaluation est une étape très importante
dans le processus d’enseignement apprentissage. Mais elle est souvent mise en
œuvre avec très peu de précaution dans l’enseignement
supérieur et particulièrement dans nos universités. En
Côte d’Ivoire comme dans la plupart des pays d’Afrique les
activités pédagogiques sont assurées en grande partie par
les Assistants et Maîtres-Assistants à cause du déficit
d’enseignants de rang magistral et du nombre pléthorique
d’étudiants (l’Université Félix Houphouët Boigny d’Abidjan
compte 61000 étudiants pour 1638 enseignants chercheurs).
L’objectif de cette étude est donc de connaître les pratiques d’évaluation des acquis des étudiants dans les universités publiques de Côte d’Ivoire. Pour atteindre cet objectif nous avons
émis l’hypothèse de recherche suivante : les pratiques d’évaluation des
apprentissages dans les universités publiques de Côte d’Ivoire
dépendent des conceptions évaluatives des enseignants. Pour
vérifier cette hypothèse nous avons collecté des
données auprès d’un échantillon de convenance
composé d’enseignants-chercheurs de l’université Félix
Houphouët Boigny d’Abidjan et de
l’Université Alassane Ouattara de Bouaké.
Les résultats de la recherche ont mis en
évidence une méconnaissance des fonctions formative et sommative
de l’évaluation des apprentissages de la plupart des
enquêtés. Ils ont révélé une absence de
cellules pédagogiques au sein des UFR et une faible variation des modes
d’évaluation orientée essentiellement vers une utilisation des
examens écrits en fin d’année.
Ainsi, il en ressort que la conception évaluative des enseignants
influe sur leur pratique d’évaluation des apprentissages. Plus
précisément, l’on note que les enseignants qui accordent une
grande importance à l’évaluation sommative utilisent beaucoup
plus l’examen écrit en fin d’année comme mode d’évaluation
des acquis des étudiants.
1. Introduction and Problem
Assessment is an important step in the teaching-learning process. It is a condition of students' learning in that it influences their motivation as learners. Indeed, good assessment is a guarantee of success even for non-gifted students, in that it promotes the establishment of control loops in the teaching-learning process, thus stimulating students' metacognitive activities such as self-regulation, self-assessment and decentering.
The functions of assessment have evolved over time. In the early 1960s, assessment essentially aimed at ensuring that learners' productions complied with the teacher's requirements. Later, another form of assessment was developed, whose purpose was to ensure that the training provided by the system was adapted to the individual characteristics of students. Thus, besides its prognostic function when students enter a training program and its summative function at the end of instruction, assessment also assumes a formative role by providing information that enables teaching to be adapted to learners' individual differences. Assessment does not amount merely to grading students, which focuses mainly on performance criteria and/or the success of the product; it goes further, because it regulates teaching and learning activities and also bears on approaches to learning and/or to producing work.
Unfortunately, current practices in universities include weaknesses that can have a negative impact on their operation. Indeed, these practices are marked by a lack of uniformity, even within the same training structure. This shows in the ways students' learning assessments are implemented and in the rubrics developed by teachers for students of the same program and the same level of training. Most teachers work alone when designing assessment tools, and the instruments they conceive are not submitted to any review by a knowledgeable person who could guarantee a minimum level of quality. As Marc Romainville (2002) points out, heterogeneity is undoubtedly their dominant feature, with no standardisation of the arrangements, procedures, requirements and criteria on the basis of which students' achievements are appraised: « L'hétérogénéité est sans doute leur premier trait dominant : on observe en effet une absence de standardisation des dispositifs, des procédures, des exigences et des critères sur la base desquels les acquis des étudiants sont appréciés ».
Since measurement is an important component of the assessment process, instruments should be calibrated to avoid large fluctuations between results; indeed, measurement refers to the notion of reference units and hence to the calibration of the instruments used. For Osman (accessed 2013), it is difficult to achieve valid measurements in education, and this problem is compounded by the lack of well-validated instruments. This lack of well-validated instruments is a source of suspicion with respect to assessment. These suspicions are fed by the high heterogeneity of assessment methods, which can partly be explained by the degree of autonomy of university teachers. As Marc Romainville (2002) notes, since the teacher-researcher enjoys a wide margin of autonomy, he organizes the assessment of his students' achievements according to his personal profile as an assessor: « l'enseignant-chercheur, puisqu'il dispose d'une importante marge d'autonomie, organise l'évaluation des acquis de ses étudiants selon son profil personnel d'évaluateur. »
Nonetheless, the expertise of higher-education teachers in assessment is not established, and it remains a source of questions that have not yet been answered, as pointed out by Marie Pellen (2010): the question of assessors' expertise is regularly raised and feeds the debates on the legitimacy of assessments and of the practices of certain assessors. « La question de l'expertise des évaluateurs est régulièrement invoquée et alimente les débats autour de la légitimité des évaluations et des pratiques de certains évaluateurs ». As we can see, assessment is surely far more complex than a mere measurement. Etymologically, the term assessment means "to determine the value of something". For Hadji (1990), assessment is a particular reading of reality. Thus, assessing efficiently requires skills and competence. It also demands "a set of behaviors based on the mobilization and efficient use of a set of resources". Indeed, developing a competence is a matter of learning and therefore of training, even if the ability to mobilize resources is not necessarily the product of instruction or training. As Bernier (1999) said, training, both initial and continuous, is an essential tool for developing the skills of the workforce and thereby the competitiveness of organizations. The exact terms are: « La formation, qu'elle soit initiale ou continue, représente alors un outil essentiel au développement des compétences de la main d'œuvre et, par le fait même, de la compétitivité des organisations. » Assessment requires skills, so it involves professionalism on the part of key actors such as teachers, because it always results in a decision whose consequences go beyond schools and universities. This is the reason why quality assessments are required. However, as pointed out by Marc Romainville (2012), this significant heterogeneity of practices is detrimental to the reliability and validity of assessment: « Cette importante hétérogénéité des pratiques nuit à la fidélité et à la validité de l'évaluation. » That is why Osman (accessed 2013) advises that every educational decision should be made as logically and objectively as possible, on the basis of the most relevant data available: « chaque décision pédagogique devrait être prise logiquement et objectivement, et devrait se baser sur les données les plus pertinentes, et ce, dans la mesure du possible ». To summarize, instructional decisions are not as easy to make as observed practice might suggest. In this sense, Bouvier (1998) argues that, for an external observer of teachers' actions, the difficulty of tracking their decisions lies in the abundance and suddenness of those actions.
University teachers of Côte d' Ivoire, as in
most universities in the world, are recruited after their doctoral thesis
defense. No particular pedagogic training is requested from them and this has
surely some consequences on the teaching quality and by extension on students’
assessments. Due to the insufficiency of professors and the great number of
students, most education activities are performed by less senior lecturers whose responsibilities are the following:
- Ensure the teaching task in the form of lectures, and/or tutorials or
practical work,
-Conceive a new course or improve the existing one
Grading exam copies;
-
Taking part in exam deliberations
-
Taking Part in pedagogic meetings and activities
-Coaching students
For having received no teacher training, we are deemed to question the
professional quality of these young teachers in general and particularly in
students’ assessments. Thus, this research will attempt to provide answers to
the following questions:
• What is less senior teachers' perception of assessment?
• What are their assessment procedures?
• How do these teachers design their students' assessment instruments?
• Are students' assessments valid, fair and reliable in our public universities, characterized as they are by a lack of teaching resources and of teacher training?
The above questions lead to the objectives of
the present research.
General Objective:
The general objective of this study is to identify the assessment practices in use in Côte d'Ivoire public universities and their impact on students' learning. This general objective generates the following specific objectives.
Specific Objectives:
More specifically, this study was conducted to:
- determine Côte d'Ivoire university teachers' social representation of assessment;
- describe the assessment procedures implemented by Côte d'Ivoire university teachers;
- describe how teachers design assessment tools;
- analyze the assessment procedures implemented by teachers and their possible impact on the teaching-learning process.
In order to provide answers to the above questions, we will try to verify the following hypotheses.
Hypotheses
- Assessment practices in use in Côte d'Ivoire public universities are related to teachers' social representation of this important activity.
From this we derive the following null hypothesis H0:
- Assessment practices in use in Côte d'Ivoire public universities have no connection with teachers' social representation of assessment.
And the alternative hypothesis H1:
- Assessment practices in use in Côte d'Ivoire public universities depend on teachers' social representation of assessment.
The above hypotheses highlight the following study variables:
- Dependent variable: "assessment practices in public universities"
- Independent variable: "teachers' evaluative conceptions"
In this study, teachers' evaluative conceptions are reflected by their position in relation to the formative and summative functions of assessment in the teaching-learning process. They are derived from the analysis of teachers' responses to the items related to the regulation function and to the certification function of assessment. They are linked to the answers to the following questions:
- Does the teacher attach importance to formative assessment (yes or no)?
- Does the teacher attach importance to summative assessment (yes or no)?
These questions generate the following minor hypotheses:
- Teachers' assessment practices are not related to the teacher's position in relation to formative assessment.
- Teachers' assessment practices are not related to the teacher's position in relation to summative assessment.
To check these hypotheses, we collected data through a questionnaire issued to teachers of Côte d'Ivoire public universities.
1.1 Methodology
As mentioned above, this study sets out to test the null hypothesis H0. To do so, we define the research population and present the sampling method and the data collection instruments.
1.2 Research Population
Our research population consists of all less senior teachers of Côte d'Ivoire public universities. Nevertheless, faced with the difficulty of reaching all of them, we decided to base this research on teachers of Université Félix Houphouët Boigny and Université Alassane Ouattara of Bouaké.
a) Sampling Method
For a better representation of the research population, we wished to conduct random sampling, but given the difficulties of implementing this type of sampling (lack of an up-to-date sampling frame of teachers, unavailability of some of them owing to study trips and various assignments), we opted for empirical sampling. Given the difficulty of convincing teachers to participate in the survey, we chose a convenience sample. This raises the issue of how representative the sample is of the parent population.
b) Sample
Our sample consisted of 110 teachers, 85 from Université Félix Houphouët Boigny and 25 from Université Alassane Ouattara of Bouaké. We recall that this is a convenience sample.
c) Data Collection Instruments
For data collection we used a structured questionnaire in four parts: the first part covers the main reasons why assessment is used; the second contains questions related to assessment procedures; the third is devoted to the different phases of designing assessment instruments; and the last part includes items about the respondents themselves. Most questions are closed; there are, however, open-ended questions that allow respondents to explain some answers or speak freely about certain issues.
d) Administration of the Questionnaire
The main difficulty we faced was teachers' unwillingness to participate in the survey. Indeed, the vast majority of the teachers contacted subtly evaded the task. We were therefore led to take advantage of teacher-training seminars to distribute the questionnaires to participants, most of whom are assistants and lecturers with between two and six years of teaching practice. With this mode of administration we achieved a return rate of 89%: of the 110 questionnaires distributed, 98 were validated, which corresponds to a satisfactory return rate (98/110 ≈ 0.89, or 89%).
The results of the research mainly concern teachers' knowledge of the functions of assessment, the assessment methods used and the construction of assessment tools. They also cover the distributions of respondents according to gender and university of origin.
2. Results
2.1 Descriptive Analysis of the Variables "Gender" and "University of Origin"
Table 1: Distribution of Respondents According to Gender

              Number   Percentage   Valid Percentage
Male          73       74.5         83.9
Female        14       14.3         16.1
Total         87       88.8         100.0
No response   11       11.2
Total         98       100.0
According to Table 1, the vast majority (74.5%) of respondents are male teachers, while a minority (14.3%) are female; 11.2% did not answer this question. The strong dominance of men in the sample reflects the reality of the teaching staff, and part of the explanation may lie in the low schooling rate of girls at primary level.
Table 2: Distribution of Respondents According to University of Origin

                               Number   Percentage
ENS; Université FHB            79       80.6
Université Alassane Ouattara   19       19.4
Total                          98       100.0
Table 2 shows that the vast majority of our respondents (80.6%) are teachers of Université Félix Houphouët Boigny; teachers from Université Alassane Ouattara are in the minority (19.4%).
2.2 Assessment in the Learning Process
In order to gather information on the state of teachers' knowledge about assessment, the proposed items were: "Getting to know students", "Getting an indication of teaching quality", "Motivating students", "Assigning marks to learners", "Meeting administrative requirements", "Facilitating the grouping of students according to ability", "Adapting teaching to learners' needs and expectations", "Identifying the best students" and "Involving students more in their learning". Respondents were asked to indicate which of these items constitute the three main reasons for assessment in the teaching-learning process.
We then classified the items into two subsets according to whether they exemplify a formative or a summative function. As observed by Renée Forgette-Giroux, Marielle Simon and Micheline Bercier-Larivière (1996), assessment generally involves making pedagogic (formative assessment) and administrative (summative assessment) decisions. The following are their exact words: « En général, l'évaluation des apprentissages implique des décisions d'ordre pédagogique (évaluation formative) et d'ordre administratif (évaluation sommative) ». Pedagogic decisions are oriented rather towards the diagnostic functions of assessment (regulation, information or ability grouping) (Cardinet, 1992; Hargreaves and Earl, 1990; Wilson, 1989). Administrative decisions, for their part, aim at the certification functions of assessment (certification of studies, selection and communication with parents) (Dorr-Bremme, 1983; Hoge and Coladarci, 1989; Stiggins, Frisbie and Griswold, 1989).
2.3 Formative Assessment Function or Teaching-Learning
Regulation Function
Table 3 "Getting to Know Students”
Is this a Reason for the Use of
Assessment? |
Number |
Percentage |
Is not a Reason for its use |
72 |
73,5 |
Is the Main Reason |
8 |
8,2 |
Is the Second Reason |
7 |
7,1 |
Is the Third Reason |
11 |
11,2 |
Total |
98 |
100,0 |
Knowing students better allows the teacher to suggest activities that can help students in difficulty, and more complex activities for the stronger students; this is the role of formative assessment. Yet according to Table 3, the vast majority (73.5%) of surveyed teachers asserted that assessment is not carried out to understand students better. Thus, only 26.5% think that understanding students better is one of the reasons for assessment in the teaching-learning process.
Table 4: "Motivating Students"

Is this a reason for the use of assessment?   Number   Percentage
Is not a reason                               68       69.4
Is the main reason                            4        4.1
Is the second reason                          15       15.3
Is the third reason                           11       11.2
Total                                         98       100.0
Motivation is required to initiate and sustain any project. For Pantanella (1992), motivation is "an energy that makes us run". In the same way, Auger and Bouchelart (1995) view motivation as what creates the conditions that lead to action, what stimulates and sets in motion. Since learning is an activity that requires commitment, it is difficult to maintain one's commitment without motivation. Indeed, a motivated student perseveres despite difficulties, shows interest in the proposed classroom activities and participates actively. For Perrenoud (1998), the purpose of assessment is to identify the achievements and ways of thinking of each student well enough to help him progress. In other words, assessment has a motivating role. However, the results in Table 4 show that a large majority (69.4%) of surveyed teachers stated that assessment does not aim at motivating learners. Only 4.1% of them considered motivation the main reason for which assessment is used. For 15.3% of our respondents, motivation is the second reason for the use of assessment in the teaching-learning process, and for 11.2% it is the third reason. In a word, only a minority of teachers think that we assess in order to motivate the learner.
• Table 5 "Involve
more Students in their Learning "
Is this a Reason for the Use of
Assessment? |
Number |
Percentage |
Valid Percentage |
is
not a reason |
79 |
80,6 |
81,4 |
is
the main reason |
5 |
5,1 |
5,2 |
Is
the second reason |
7 |
7,1 |
7,2 |
is
the third reason |
6 |
6,1 |
6,2 |
Total |
97 |
99,0 |
100,0 |
Missing |
1 |
1,0 |
|
Total |
98 |
100,0 |
|
Among the factors that influence students' motivation, those related to the class constitute, for a teacher, the "gateway" for intervening with students who are experiencing learning difficulties. This consists in developing students' skills in peer and self-assessment, and helping them develop appropriate strategies for learning to learn. For Charles Hadji (2012), the best way to perform one's task as a teacher is to enhance students' capacity for self-regulation, which implies putting assessment at the core of learning. The results in Table 5 indicate that for the vast majority (80.6%) of surveyed teachers, the role of assessment is not to involve students more in their own learning. Only 5.1% of them believe that this is the main reason for which assessment is used. For 7.1% of our respondents, involving students more in their own learning is the second reason why assessment is used, while for 6.1% of them it is the third reason.
•Table 6 "Adapting one’s
Teachings to Learners’ Needs and Expectations"
Is this a Reason for the Use of Assessment? |
Effectifs |
Pourcentage |
is not a reason |
53 |
54,1 |
is the main reason |
9 |
9,2 |
Is the second reason |
22 |
22,4 |
is the third reason |
14 |
14,3 |
Total |
98 |
100,0 |
According to Laurent Talbot (2009), formative assessment is a set of procedures, more or less formalized, by which the teacher seeks to adapt his teaching to learners' progress or difficulties. Formative assessment is thus an instrument of differentiation among learners. Table 6 reveals that for the majority of the surveyed teachers (54.1%), assessment does not consist in adapting one's teaching to learners' needs and expectations. Conversely, a minority (9.2%) of the same respondents declare that they assess primarily to meet learners' needs and expectations. Respectively, 22.4% and 14.3% asserted that adapting one's teaching to learners' needs and expectations constitutes the second and third reason why teachers use assessment.
Table 7: Assessment as an Indication of Teaching Quality

Is this a reason for the use of assessment?   Number   Percentage
Is not a reason                               27       27.6
Is the main reason                            45       45.9
Is the second reason                          18       18.4
Is the third reason                           8        8.2
Total                                         98       100.0
Assessment plays an important role in improving the quality of teaching decisions. Indeed, this activity enlightens teachers on their learners' degree of mastery of a concept; it allows them to plan and guide instruction while providing useful feedback to students. Thus, students' learning results are an indication of the quality of the instruction given by a teacher. This position is shared by 45.9% of our surveyed teachers, who stated that using assessment as an indication of teaching quality is the main reason for which assessment is used. Likewise, 18.4% and 8.2% of our respondents respectively assert that using assessment as an indication of teaching quality constitutes the second and third reason why assessment is used in the teaching-learning process. Contrary to this position, a significant proportion (27.6%) of surveyed teachers stated that "using assessment as an indication of teaching quality" is not a reason for which assessment is used.
2.4 The Summative
Function of Assessment
•Table 7 "Assign marks to
learners"
Is this a Reason for the Use of Assessment? |
Effectifs |
Pourcentage |
is not
a reason |
75 |
76,5 |
is the main reason |
8 |
8,2 |
Is the second reason |
7 |
7,1 |
is the third reason |
8 |
8,2 |
Total |
98 |
100,0 |
Summative assessment is characterized by the practice of assigning grades to students at the end of every teaching session. This is what is done in most universities in the world, and Côte d'Ivoire university teachers are no exception to this practice inherited from traditional pedagogy. Surprisingly, the great majority of our respondents (76.5%) asserted that awarding marks is not a reason for the use of assessment in the teaching-learning process. Only 8.2%, 7.1% and 8.2% of them stated that grading students is respectively the first, second and third reason why assessment is used.
Table 8: "Facilitate Grouping According to Ability" and "Identify the Best Students"

                                              Facilitate grouping        Identify the best students
Is this a reason for the use of assessment?   Number    Percentage       Number    Percentage
Is not a reason                               94        95.9             74        75.5
Is the main reason                            -         -                8         8.2
Is the second reason                          2         2.0              5         5.1
Is the third reason                           2         2.0              10        10.2
Total                                         98        100.0            97        99.0
Missing                                       0         0                1         1.0
Total                                         98        100.0            98        100.0
Normative assessment, closely related to summative assessment, aims to classify or select students, to identify a group of students going through learning difficulties in order to help them individually, or to identify very good students for additional training in order to give them a new orientation. Consequently, this type of assessment facilitates the grouping of learners according to ability while making it possible to identify the best students. According to Table 8, almost all (95.9%) of our respondents stated that they do not use assessment to facilitate grouping according to ability, while 75.5% of them stated that it is not carried out to identify the best students either. Thus, only 24.5% of the surveyed teachers acknowledged that "identifying the best students" is a reason for which assessment is carried out.
Table 9: "Meet Administrative Requirements"

Is this a reason for the use of assessment?   Number   Percentage
Is not a reason                               76       77.6
Is the main reason                            1        1.0
Is the second reason                          1        1.0
Is the third reason                           20       20.4
Total                                         98       100.0
Summative assessment leads to administrative decisions, because education institutions rely on the marks teachers produce through summative assessment to select and classify learners, and to undertake reforms to meet modern demands. Thus, beyond pedagogic requirements, assessment is usually organized to meet administrative necessities. However, the great majority (77.6%) of the surveyed teachers indicated that assessment is not organized to meet administrative requirements. Only a small proportion of them (1%) identified "meeting administrative requirements" as the main reason why summative assessment is used in the teaching-learning process, while for 20.4% of respondents it is the third reason for the use of assessment.
The results of the present survey reveal serious deficiencies in teachers' understanding of the functions of assessment. Indeed, the majority of the surveyed teachers considered that none of the following items constitutes a reason for which assessment is used in the teaching-learning process: "getting to know students better", "adapting teaching to learners' needs and expectations", "getting an indication of teaching quality", "involving students in their own learning", "assigning grades", "facilitating grouping according to ability" and "identifying the best students".
2.5
Descriptive
Analysis of Assessment Modes
Table 10: Use of Quizzes during the Academic Year

          Number   Percentage   Valid Percentage   Cumulative Percentage
Never     49       50.0         50.0               50.0
Rarely    43       43.9         43.9               93.9
Often     6        6.1          6.1                100.0
Total     98       100.0        100.0
The above table shows that almost all (93.9%) of our respondents never or rarely use quizzes (50% never use them and 43.9% rarely do). Only an insignificant minority (6.1%) asserted that they often use this mode of assessment during the teaching-learning process.
Table 11: Lab Report or Practical Work Observation

          Number   Percentage   Cumulative Percentage
Never     51       52.0         52.0
Rarely    17       17.3         69.4
Often     18       18.4         87.8
Always    12       12.2         100.0
Total     98       100.0
Lab reports are not widely used by the surveyed teachers as a means of assessing learners. Indeed, only 30.6% of the surveyed teachers asserted that they often or always assess on the basis of laboratory reports. This can be explained by the fact that very few of the surveyed teachers come from the UFR of Science, where lab reports are essential to students' training.
Table 12: Research

              Number   Percentage   Valid Percentage   Cumulative Percentage
Never         2        2.0          2.1                2.1
Rarely        7        7.1          7.2                9.3
Often         67       68.4         69.1               78.4
Always        21       21.4         21.6               100.0
Total         97       99.0         100.0
No response   1        1.0
Total         98       100.0
A great number of teachers use research work to assess their students. Indeed, 89.8% of our respondents reported that they always or often use research work for this purpose. These figures show the importance most university teachers attach to research; it is perhaps worth recalling that research and training are the two main missions of our universities.
Table 13: Classroom Assessment (Exercise, Problem, Abstract)

              Number   Percentage   Valid Percentage   Cumulative Percentage
Rarely        4        4.1          4.1                4.1
Often         44       44.9         45.4               49.5
Always        49       50.0         50.5               100.0
Total         97       99.0         100.0
No response   1        1.0
Total         98       100.0
Classroom assessment in the form of exercises, problem solving and reading abstracts is widely used in assessing students. Indeed, 94.9% of our respondents acknowledge that they often or always use this assessment mode, while only a few teachers (4.1%) say they rarely use it.
Table 14: Observation of Classroom Activities

              Number   Percentage   Valid Percentage   Cumulative Percentage
Never         13       13.3         13.5               13.5
Rarely        26       26.5         27.1               40.6
Often         40       40.8         41.7               82.3
Always        17       17.3         17.7               100.0
Total         96       98.0         100.0
No response   2        2.0
Total         98       100.0
The observation of students' classroom activities is regularly used by the majority (58.1%) of the surveyed teachers. However, a significant proportion (39.8%) of our respondents asserted the contrary.
Table 15: The Portfolio

              Number   Percentage   Valid Percentage   Cumulative Percentage
Never         76       77.6         98.7               98.7
Rarely        1        1.0          1.3                100.0
Total         77       78.6         100.0
No response   21       21.4
Total         98       100.0
The portfolio is almost entirely absent from students' assessment. Indeed, 78.6% of the surveyed teachers asserted that they never or only rarely use it (77.6% never, 1% rarely). There is also a significant proportion (21.4%) of non-responses regarding this assessment mode, which could point to teachers' complete unfamiliarity with it.
Table 16: Written Exam

              Number   Percentage   Valid Percentage   Cumulative Percentage
Often         11       11.2         11.7               11.7
Always        83       84.7         88.3               100.0
Total         94       95.9         100.0
No response   4        4.1
Total         98       100.0
The written examination at the end of the year is the most popular assessment method in our universities. Indeed, almost all (95.9%) of our respondents declared that they use this assessment mode, inherited from traditional pedagogy, whose first concern is the selection of students and compliance with administrative requirements.
Table 17: Oral Exam

              Number   Percentage   Valid Percentage   Cumulative Percentage
Never         20       20.4         24.4               24.4
Rarely        22       22.4         26.8               51.2
Often         22       22.4         26.8               78.0
Always        18       18.4         22.0               100.0
Total         82       83.7         100.0
No response   16       16.3
Total         98       100.0
Contrary to the written exam, the oral examination at the end of the year is less used as an assessment method in our universities. Indeed, the results of our descriptive analysis indicate that less than half (40.8%) of our respondents regularly use this method, while 42.8% say the contrary. We also observed 16.3% of non-responses concerning this assessment mode.
Table 18: Presentations by Students

          Number   Percentage   Cumulative Percentage
Never     6        6.1          6.1
Rarely    13       13.3         19.4
Often     46       46.9         66.3
Always    33       33.7         100.0
Total     98       100.0
Oral presentations by students on a topic decided in class also appear as an assessment means frequently used by 80.6% of the surveyed teachers.
Table 19: Academic Projects

          Number   Percentage   Cumulative Percentage
Never     33       33.7         33.7
Rarely    46       46.9         80.6
Often     18       18.4         99.0
Always    1        1.0          100.0
Total     98       100.0
We can observe that a great number (80.6%) of the surveyed teachers never or rarely use this assessment mode. Only a small proportion of them (18.4% often and 1% always) acknowledge that they use it.
The preceding pages have shown a variety of assessment modes that are more or less used in our universities. However, as Martine Ferguson (December 2005) points out, « Pour mieux tenir compte de la personnalité et de la diversité de nos élèves, il convient de varier les modes d'évaluation en diversifiant les types d'exercices qu'on leur propose tant à l'écrit qu'à l'oral, individuellement ou en groupe. » In other words, to take better account of the personality and diversity of our students, it is appropriate to vary assessment modes by diversifying the types of exercises they are given, both written and oral, individually or in groups. Our results have shown frequently used assessment modes such as research work, written tests, reading abstracts and year-end written examinations, and seldom-used modes such as the portfolio and written and oral quizzes.
For a better appraisal of the conditions under which assessment instruments are designed and of the weight given to student work under each assessment mode, we present the following statistics in the next tables.
Table 20: About the Existence of a Pedagogic Cell in Every School

              Number   Percentage   Valid Percentage
Yes           47       48.0         69.1
No            21       21.4         30.9
Total         68       69.4         100.0
No response   30       30.6
Total         98       100.0
Table 21: About the Contribution of Pedagogic Cells to the Design of Assessment Instruments

              Number   Percentage   Valid Percentage
Yes           18       18.4         41.9
No            25       25.5         58.1
Total         43       43.9         100.0
No response   55       56.1
Total         98       100.0
2.6 Proportion of Each Assessment Mode in Students' Assessment per Academic Year
The results indicate the average weight given to the different assessment modes over a full academic year. We obtained the following average proportions: homework (3.01%), quizzes during the year (0.61%), lab reports or observation (3.88%), classroom written tests (6.12%), research (6.07%), anecdotal test (0%), portfolio (0.05%), year-end written examination (70.51%), year-end oral exam (5.99%), project (1.17%) and class work (5.38%). We can observe that the year-end written examination is the most common assessment mode in use in our universities, distantly followed by the classroom written test. In contrast, the anecdotal test and the portfolio are absent from teachers' assessment practices.
Moreover, the medians show that for at least 50% of our respondents the year-end written examination represents at least 70% of students' final mark, whereas each of the other assessment modes is not used by at least 50% of the surveyed teachers. It also appears that the weight of the year-end written examination ranges from 50% to 100%, which means that some teachers use only this assessment mode. The question that follows from the preceding observations is whether these results are related to teachers' conception of assessment.
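By way of illustration, the following sketch shows how such average and median weights per assessment mode could be computed from respondent-level data with Python and pandas. The data frame, its column names and its values are hypothetical and serve only to illustrate the calculation; they are not the survey data.

import pandas as pd

# Hypothetical respondent-level data: each row is one teacher, each column the
# weight (in %) that the teacher gives to an assessment mode over the year.
responses = pd.DataFrame({
    "homework":         [0, 5, 10, 0],
    "quizzes":          [0, 0, 5, 0],
    "research":         [10, 10, 25, 0],
    "year_end_written": [80, 70, 60, 100],
    "year_end_oral":    [10, 15, 0, 0],
})

# Average weight of each mode across respondents (cf. the averages above).
print(responses.mean().round(2))

# Median weight of each mode: a median of 70 or more for the year-end written
# exam would mean that, for at least half of the respondents, this exam counts
# for at least 70% of the final mark.
print(responses.median().round(2))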
2.7 Checking Our Research Hypotheses
It should be recalled that the independent variables are qualitative and dichotomous: they consist of the answers to the questions on the position of teachers regarding the importance of formative assessment and the importance of summative assessment in the teaching-learning process.
It follows from the above results that teachers' assessment practices are essentially characterized by the weight given to the different assessment modes, and in particular by the weight of the year-end written exam.
The dependent variable is therefore quantitative and, since each of the two variables related to teachers' conception of assessment is qualitative and dichotomous, the appropriate statistical test for verifying the hypothesis is Student's t test for independent samples.
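A minimal sketch of this design, under assumed data: the dichotomous conception variable splits respondents into two groups, and the weight (in %) given to the year-end written exam is compared between the groups with an independent-samples Student t test. The group labels and values below are illustrative only; they are not the survey data.

from scipy import stats

# Hypothetical weights (%) given to the year-end written exam, split by the
# answer to "Does the teacher attach importance to summative assessment?".
weights_little_importance = [50, 60, 55, 70, 65]   # attach little importance
weights_great_importance  = [80, 75, 90, 70, 85]   # attach great importance

# Classical independent-samples t test with equal variances assumed.
t_stat, p_value = stats.ttest_ind(weights_great_importance,
                                  weights_little_importance,
                                  equal_var=True)

# H0 is rejected at the 5% level when p < 0.05
# (equivalently, when |t| exceeds the tabulated critical value).
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")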
Table 22: Group Statistics According to the Importance Attached to Formative Assessment

Year-end written exam proportion (%)                N    Mean    Std. deviation   Std. error of mean
Attach little importance to formative assessment   16   71.88   19.397           4.849
Attach great importance to formative assessment    81   68.77   19.438           2.160
According to the above table, teachers who attach little importance to formative assessment and those who attach great importance to it give roughly equal weight to the year-end written examination. One can nevertheless note that the former give slightly more weight to it than the latter: for the first group this examination represents on average 71.88% of students' yearly results, against 68.77% for the second. The calculated Student's t is 0.585, while the tabulated value of t at the 5% level with 95 degrees of freedom is 1.9867. Since the calculated t is smaller than this value, we cannot reject the null hypothesis H0. In other words, the position of a teacher regarding the importance of formative assessment does not affect his or her assessment practices.
Table 23: Group Statistics According to the Importance Attached to Summative Assessment

Year-end written exam proportion (%)                N    Mean    Std. deviation   Std. error of mean
Attach little importance to summative assessment   58   65.26   19.611           2.575
Attach great importance to summative assessment    39   74.23   17.265           2.765
Unlike in the previous table, teachers who attach little importance to summative assessment give less weight (65.26% on average) to the year-end written exam, while those who attach great importance to summative assessment give it more weight (74.23%). Student's t test, with 95 degrees of freedom at the 5% level, shows that the difference between the two means is significant: the calculated t is 2.316, while the tabulated value of t is 1.9867. Since the calculated t is greater than the tabulated t, we can reject the null hypothesis and accept the alternative hypothesis, which states that assessment practices are related to the teacher's position regarding summative assessment. In other words, the more importance a teacher attaches to summative assessment, the more weight he or she gives to year-end written examinations. These results indicate a high risk of poor reliability in the assessments generally performed by assistants and lecturers teaching in Côte d'Ivoire public universities.
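The two reported t statistics can be checked from the group statistics alone. The sketch below recomputes them from the means, standard deviations and group sizes given in Tables 22 and 23, using the summary-statistics form of the test in scipy; the figures come from the tables, while the choice of function is ours.

from scipy import stats

# Formative assessment groups (Table 22): reported t = 0.585.
t_formative, p_formative = stats.ttest_ind_from_stats(
    mean1=71.88, std1=19.397, nobs1=16,   # little importance to formative assessment
    mean2=68.77, std2=19.438, nobs2=81,   # great importance to formative assessment
    equal_var=True)

# Summative assessment groups (Table 23): reported t = 2.316.
t_summative, p_summative = stats.ttest_ind_from_stats(
    mean1=74.23, std1=17.265, nobs1=39,   # great importance to summative assessment
    mean2=65.26, std2=19.611, nobs2=58,   # little importance to summative assessment
    equal_var=True)

# Two-tailed critical value at the 5% level with 16+81-2 = 58+39-2 = 95 degrees of freedom.
t_critical = stats.t.ppf(0.975, df=95)

print(f"formative:  t = {t_formative:.3f}  (critical value about {t_critical:.3f})")
print(f"summative:  t = {t_summative:.3f}  (critical value about {t_critical:.3f})")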
3. Conclusion
This study aimed to provide information about the assessment practices of less senior teachers in Côte d'Ivoire public universities. It has revealed that teachers experience difficulties in implementing assessment activities, which constitute a crucial step in the teaching-learning process. Indeed, the results revealed serious weaknesses in the surveyed teachers' knowledge of the formative and summative functions of assessment. Their daily pedagogic practices show little variation in assessment modes, which are primarily oriented towards year-end written exams counting for over 70% of the calculation of students' annual average. Student's t test showed a significant relationship, at the 5% level with 95 degrees of freedom, between the primacy that less senior teachers give to summative assessment in their teaching and supervisory practices and the important role given to the year-end written examination in students' assessment.
4. References
PELLEN M. (2010). Évaluation et enseignement supérieur. Le calendrier des lettres et sciences humaines.
REY A. (2000). Le Robert. Dictionnaire historique de la langue française.
HADJI C. (1990). L'évaluation, règles du jeu. Paris : ESF.
BERNIER C. (1999). « Vers une formation continue de la main-d'œuvre au Québec ? ». Relations industrielles, vol. 54, no 3, 489-502.
BOUVIER A. (1998). Faut-il, au sein des organisations, substituer le pilotage à l'évaluation ? In L'évaluation institutionnelle de l'éducation. Montréal : Éditions de l'AFIDES, pp. 137-150.
PERRENOUD P. (1998). L'évaluation des élèves. De la fabrication de l'excellence à la régulation des apprentissages. Entre deux logiques. Bruxelles, Paris : De Boeck Université.
HADJI C. (2012). Comment impliquer l'élève dans ses apprentissages. ESF Éditeur.
TALBOT L. (2009). L'évaluation formative. Comment évaluer pour remédier aux difficultés d'apprentissages. Paris : Armand Colin.
FERGUSON M. (décembre 2005). « L'évaluation des élèves ». Cahiers pédagogiques, n° 438. http://www.cahiers-pedagogiques.com/Une-evaluation-respectueuse-des-individualites