Testing the participants’ knowledge in online education should be carefully prepared, as improvisation is unwelcome in this delicate area. It is based on (a) well-planned study guidelines, (b) expected results and (c) criteria for evaluating the participants’ results.
Knowledge assessment is usually the most stressful part of online subjects/courses for most participants. This can be alleviated by informing potential participants about the content of a course/subject before they enrol. Apart from that, at the beginning of their online education the participants have to be familiarised with the system of work and the way knowledge is assessed. This means that they should be informed of all the factors influencing the final grade, with a precise specification of the desired/required number of points, as well as the point scale for every element that will be evaluated (e.g. participation in activities, test results, projects, participation in discussions, final tests, etc.).
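To make such a point scheme concrete, the short sketch below shows one possible way of combining the points from individual graded elements into a final grade. It is only an illustration: the element names, point maxima and grade thresholds are assumptions for the example, not values prescribed by any particular LMS or institution.

# Hypothetical point scheme for an online course; every element that
# influences the final grade is listed with its maximum number of points.
MAX_POINTS = {
    "activities": 10,
    "tests": 30,
    "project": 25,
    "discussions": 10,
    "final_test": 25,
}

# Assumed grading scale: minimum percentage of total points for each grade.
GRADE_SCALE = [(90, "excellent"), (75, "very good"), (60, "good"), (50, "sufficient")]

def final_grade(earned: dict) -> tuple:
    """Sum the earned points and map the percentage to a grade."""
    total = sum(earned.get(name, 0) for name in MAX_POINTS)
    percentage = 100 * total / sum(MAX_POINTS.values())
    for threshold, grade in GRADE_SCALE:
        if percentage >= threshold:
            return total, grade
    return total, "insufficient"

# Example: a participant's points per element, evaluated against the announced scale.
print(final_grade({"activities": 8, "tests": 24, "project": 20,
                   "discussions": 7, "final_test": 19}))   # -> (78, 'very good')

Publishing the scheme in this explicit form at the start of the course leaves no ambiguity about how each element contributes to the final grade.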
The deadline for finishing each element that has a bearing on the final grade should be known in advance, and the participants have to adhere to the set deadlines. Shortening deadlines is not advisable, since the participants usually have other educational and/or work commitments which they have adapted to the previously published deadlines; consequently, shortening deadlines will cause significant discontent. Extending deadlines, on the other hand, is accepted much better, but may make the instructor/mentor appear inconsistent if practised too often or if extensions are granted even for simple assignments that can be completed on time. Apart from that, extending deadlines can prolong the overall duration of the course, which can cause discontent among the better and more conscientious participants who would probably like to finish on time.
It should be pointed out that some e-education pedagogical models do not have firmly defined dates and deadlines for handing in the elements that have a bearing on the final grade, and sometimes not even a planned (fixed) order in which these elements are performed. Whereas with precisely defined deadlines the participants’ assignments/activities are mostly “synchronised” and performed at a set hour, day or week, without fixed deadlines the participants pace their work themselves in accordance with their wishes and possibilities, and sometimes even determine the order of the topics/lessons they are trying to master. In most cases of online instruction at higher education institutions the graded elements are “synchronised”, while in continuing education for employees of business organisations asynchronous work with the LMS is much more common.
During the participants’ work with the LMS there are sometimes attempts to measure how actively each participant uses certain educational materials and to gain insight into the continuity of their work. Through the LMS the lecturer/instructor can usually review the participants’ work on a daily, weekly or monthly basis.
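As a rough illustration of this kind of insight, the sketch below aggregates a hypothetical LMS activity log into the number of active days per participant and per week; the log format and field names are assumptions made for the example, since real LMSs expose this information through their own reports.

from collections import defaultdict
from datetime import date

# Hypothetical LMS activity log: (participant, date of access, material used).
activity_log = [
    ("ana",  date(2024, 3, 4),  "lesson-1"),
    ("ana",  date(2024, 3, 5),  "lesson-2"),
    ("ivan", date(2024, 3, 9),  "lesson-1"),
    ("ana",  date(2024, 3, 12), "lesson-3"),
]

# Count distinct active days per participant and ISO week as a continuity indicator.
active_days = defaultdict(set)
for participant, day, _material in activity_log:
    week = day.isocalendar()[1]
    active_days[(participant, week)].add(day)

for (participant, week), days in sorted(active_days.items()):
    print(f"{participant}: week {week}: active on {len(days)} day(s)")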
From the lecturer’s/instructor’s point of view, tests are the most elegant way to examine knowledge, as the LMS provides very useful information about the success of each individual participant. The lecturer/instructor can also obtain data comparing a participant with the other participants, down to the level of single questions in a test.
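The sketch below illustrates the kind of per-question comparison meant here: one participant’s result on each test question is set against the average of the whole group. The data and names are hypothetical; an actual LMS would present the same information through its own reporting interface.

# Hypothetical per-question test results: 1 = correct, 0 = incorrect.
results = {
    "ana":  [1, 0, 1, 1],
    "ivan": [1, 1, 0, 1],
    "mira": [0, 0, 1, 1],
}

def question_report(participant: str) -> None:
    """Compare one participant's answers with the group average for every question."""
    n_questions = len(next(iter(results.values())))
    for q in range(n_questions):
        group_avg = sum(r[q] for r in results.values()) / len(results)
        own = results[participant][q]
        print(f"question {q + 1}: participant {own}, group average {group_avg:.2f}")

question_report("ana")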
Potential difficulties in applying knowledge tests are related to possible manipulation of the LMS by the participants, e.g. asking a third person for help or using various aids (textbooks etc.) during the test. For these reasons testing should be conducted in certified institutions which control access to and work on the computers, and which restrict the use of prohibited forms of communication (forum, chat etc.) that the more resourceful participants could use during the test.
We have already mentioned that projects are very useful as a knowledge assessment method, but also very demanding for the lecturer/instructor. Even with projects there is a possibility that other people are involved in their solution and development. With team projects there is a danger that individual members do not contribute to the team’s work equally (in quantity or quality), while the team shows solidarity by claiming in the project report that such members contributed more than they really did. In both cases the instructor/mentor can determine the degree of each participant’s authorship contribution to the project only by talking to them (e.g. through a videoconference).
Discussion participation (e.g. in forums) can easily be determined quantitatively through the support provided by the LMS; for example, one indicator of a participant’s activity is the number of messages they have read and posted. Qualitative evaluation, however, requires more time, as the lecturer/instructor has to assess the participants’ contributions to discussions and projects, which requires regular reading of their posts and continuous evaluation of their quality, as well as detailed analysis of their projects and of each individual participant’s contribution.
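A minimal sketch of such a quantitative indicator is given below. It assumes a hypothetical list of forum events exported from the LMS (the event format is an assumption for the example) and simply counts the messages each participant has read and posted; it says nothing about the quality of those messages, which is exactly why the qualitative evaluation described above remains necessary.

from collections import Counter

# Hypothetical forum event log exported from the LMS: (participant, action).
forum_events = [
    ("ana", "post"), ("ana", "read"), ("ivan", "read"),
    ("ana", "read"), ("ivan", "post"), ("mira", "read"),
]

# Count posted and read messages per participant.
counts = Counter(forum_events)
for participant in sorted({p for p, _ in forum_events}):
    posted = counts[(participant, "post")]
    read = counts[(participant, "read")]
    print(f"{participant}: posted {posted}, read {read}")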
Tips & tricks: Determine the most appropriate knowledge assessment methods bearing in mind the educational goals, the final desired level of knowledge and skills of the participants, available technology, the time an instructor/lecturer has at their disposal for evaluation of the student results, as well as acceptability of the testing methods and the planned deadlines for the participants. After that, announce to the participants in time all the elements that will influence their final grade in an online course, as well as predicted deadlines for certain elements to be finished. Evaluate the knowledge assessment methods from time to time against the engagement of the participants, acquired knowledge, acceptability for the users and the workload of the lecturer/instructor that should evaluate them.