Literature

Reading the article on college student web use (Metzger, Flanagin and Zwarun 2003), it was evident that it had been written about 15 years ago. I felt that the study and its findings still reflected the academic bias against the internet in general, and against everything that is not in print in particular. The most striking finding, in my opinion, was that nonstudents indicated that they verified online information more than students did, although students found the internet less credible than newspapers, TV and magazines (p. 286). Later on, in the conclusion, it is suggested that this might be due to students being more familiar with the internet in general and seeing it as a means to an end. The study also mentions that students rank the internet's usefulness for improving the quality of their work low, and it suggests that quantity and ease of finding results are more important to students than quality.

I think these findings, and the general view of the internet as a threat to academic quality, are quite outdated. Nowadays there are so many peer-reviewed online journals, publications and databases that it cannot be said that online research produces lower-quality work. The big question, however, is still how students learn to distinguish between different types of resources and evaluate their credibility. Especially in an age of fake news, not only students but everyone should approach online information mindful of possible misinformation.

Regarding Aggarwal’s article (Aggarwal et al. 2014), I found the attempt to automate credibility evaluation very interesting. One thing I wondered about is whether all six factors that websites are assessed on (website type, popularity rank, last update, sentiment analysis, reputation, review category) should be given equal weight in the calculation of the overall score. Rankings, reviews and popularity might be biased depending on which community of users is likely to access the website and what their opinions and views are. Using external real-time databases, as the authors do, most likely diminishes this problem. WebCAST sounds like a great tool, and the fact that it is open source is a great step towards helping people navigate the web with an eye to the credibility of possible resources. It can be a great tool for teaching and training purposes, as the authors suggest; however, I would be careful about relying completely on an automated programme to evaluate the credibility of my resources. Human judgment and reasoning are, in my opinion, still the most trustworthy, if time-consuming, tools we can use.

My method of evaluation

My method so far

I have to say that so far I have never really had a specific method or system that I could put into words. I tend to play it safe and use primarily resources provided by the institution I am working in or for, and ones that have a good reputation in academic circles and among my peers. This approach is safe, but it also bears the danger of researching in a bubble and not taking into account resources outside my field.

Establishing a Method

Across different institutional guidelines, important points to consider when evaluating resources are accuracy, authority, objectivity/bias, currency and coverage (they are also mentioned in Metzger’s article). I would also add cross-reference as a category, to determine how the resource interacts with other resources and whether it is used by resources I have already evaluated as trustworthy/relevant. For my own purposes, I add relevancy as a category, to determine how relevant the resource is for my personal research interest.

Categories

This gives me the following prerequisites for a resource to be approved in each category, drawing on Metzger:

accuracy: the resource gives information that can be verified using other sources and is reliable; it is free from errors (regarding content, layout and language); it includes a bibliography or list of sources used

authority: authors and contributors are indicated; their qualifications are listed or can be easily accessed

objectivity/bias: the resource provides facts and indicates when a subjective opinion is given; the purpose of the resource is to inform; the resource has no affiliation to biased institutions/resources

currency: the information provided is up to date and is regularly reviewed and updated; the date of publication is clearly indicated

coverage: the information provided goes beneath the surface of the topic, is comprehensive in its field, and provides references or outlooks for further research

cross-reference: the resource is referenced by, and references, other resources which can be evaluated as trustworthy/credible; it is part of a network/community of information in its field of expertise. The resource does not reference itself except for organizational purposes. This category can also include the evaluator’s personal experience.

relevancy: the resource is relevant for my research interest

Methodology

I evaluated the chosen resources according to those seven categories on a scale from 1 to 10, with 10 being “completely approved in this category” and 1 being “not at all approved in this category”. From the values given, an overall score is calculated, according to which the resources are ranked. The calculation is done as follows: (x1 + x2 + … + x7) / n, with x being the value in the respective category and n the number of categories (7). The final score is rounded to two decimal places.
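To make the calculation concrete, here is a minimal Python sketch of the scoring step just described. The category names simply mirror the list above, and the example reuses the values given for source 1 in the evaluation table; it is an illustration, not a tool I used for the evaluation itself.

```python
# Minimal sketch of the scoring step: the overall score is the arithmetic mean
# of the seven category values (each on a scale of 1-10), rounded to two
# decimal places.

CATEGORIES = [
    "accuracy", "authority", "objectivity/bias", "currency",
    "coverage", "cross-reference", "relevancy",
]

def overall_score(values):
    """Average one value per category and round to two decimal places."""
    if len(values) != len(CATEGORIES):
        raise ValueError("expected exactly one value per category")
    return round(sum(values) / len(values), 2)

# Example: the values given for source 1 in the evaluation table
print(overall_score([9, 8, 6, 9, 7, 7, 7]))  # -> 7.57
```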

Evaluation:

Source  Accuracy  Authority  Objectivity/Bias  Currency  Coverage  Cross-Ref.  Relevancy  Score
1       9         8          6                 9         7         7           7          7.57
2       10        8          9                 6         8         8           8          8.14
3       10        8          9                 8         9         7           10         8.71
4       9         7          7                 10        8         8           10         8.43
5       9         8          6                 5         5         5           9          7.71
6       7         5          6                 10        7         5           9          7.00
7       10        8          9                 8         9         7           10         8.71
8       10        10         9                 8         10        10          10         9.57
9       8         7          8                 5         7         9           7          7.29
10      10        7          7                 10        3         9           6          7.43
11      8         8          7                 9         6         9           8          7.86
12      8         9          4                 10        0         3           0          4.86
13      10        9          9                 9         9         10          10         9.43

Ranking:

Rank Source
1 8
2 13
3 3, 7
4 4
5 2
6 11
7 5
8 1
9 10
10 9
11 6
12 12
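The ranking above is simply the sources sorted by their overall score in descending order, with tied scores (sources 3 and 7) sharing a rank. The short sketch below reproduces it from the scores in the evaluation table; it is only meant to make the sorting and tie-handling explicit.

```python
# Sketch of the ranking step: sort sources by overall score (descending)
# and give sources with the same score the same rank.
from itertools import groupby

# Overall scores taken from the evaluation table above, keyed by source number.
scores = {
    1: 7.57, 2: 8.14, 3: 8.71, 4: 8.43, 5: 7.71, 6: 7.00, 7: 8.71,
    8: 9.57, 9: 7.29, 10: 7.43, 11: 7.86, 12: 4.86, 13: 9.43,
}

ordered = sorted(scores.items(), key=lambda item: item[1], reverse=True)
for rank, (score, group) in enumerate(groupby(ordered, key=lambda item: item[1]), start=1):
    sources = ", ".join(str(src) for src, _ in group)
    print(f"Rank {rank}: source(s) {sources} (score {score})")
```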

List of Sources

1 Berry, D.M. and Galloway, A.R. 2016. A Network is a Network is a Network: Reflections on the Computational and the Societies of Control. Theory, Culture & Society, 33(4), pp.151–172.
2 Becker, H.S. 1974. Art As Collective Action. American Sociological Review, 39(6), pp.767–776.
3 DiMaggio, P. 2011. Chapter 20: Cultural Networks. IN: J. Scott and P. J. Carrington (eds.) The SAGE Handbook of Social Network Analysis. London; Thousand Oaks, Calif.: SAGE, pp. 286–300.
4 Porras, S. 2017. Keeping Our Eyes Open: Visualizing Networks and Art History. Artl@s Bulletin, 6(3), pp.42–49.
5 Finkelstein, S. 1966. Reviewed Work: Canvases and Careers: Institutional Changes in the French Painting World by Harrison C. White, Cynthia A. White. Science & Society, 30(2), pp.238–241.
6 Using Onodo to Visualise Company Ownership Networks. 2018. Publish What You Pay. Available from: http://www.publishwhatyoupay.org/pwyp-resources/using-onodo-visualise-company-ownership-networks-2/ [Accessed November 3, 2018].
7 Drucker, J. 2013. Is There a “Digital” Art History? Visual Resources, 29(1–2), pp.5–13.
8 Klinke, H. and Surkemper, L. 2015. International Journal for Digital Art History, Issue 1. Munich: Graphentis Verlag.
9 Promey, S.M. and Stewart, M. 1997. Digital Art History: A New Field for Collaboration. American Art, 11(2), pp.36–41.
10 dahjournal 2018. Digital Art History Journal Twitter Account. @dahjournal.
11 Digital Art History (Getty Foundation). Available from: http://www.getty.edu/foundation/initiatives/current/dah/index.html [Accessed September 20, 2018].
12 Ambrose, K. 2015. Digital Art History. The Art Bulletin, 97(3), pp.245–245.
13 Manovich, L. 2015. Data Science and Digital Art History. International Journal for Digital Art History, 1(1), pp.13–35.

My experiences with this evaluation:

I have never done this sort of categorization of my resources before, and I find it a great way of separating the wheat from the chaff and finding out which resources are worth looking into in more depth. It also gives me a feeling of security regarding the quality of the resources I decide to use for my research. What I find challenging is the great variety of types of resources and bringing them together in this rather rigid system of categorization. I also found that the overall score does not necessarily reflect how useful a resource is for my research, as the category “relevancy” only makes up 1/7 of the rating. Furthermore, it can sometimes be interesting and necessary to use biased resources in order to give an overview of the opinions on a topic, so those resources would have to be looked at from a different angle.


Bibliography

Aggarwal, S., Van Oostendorp, H., Reddy, Y.R. and Indurkhya, B. 2014. Providing Web Credibility Assessment Support. IN: Proceedings of the 2014 European Conference on Cognitive Ergonomics (ECCE ’14). Vienna, Austria: ACM Press, pp. 1–8.

Metzger, M.J., Flanagin, A.J. and Zwarun, L. 2003. College student Web use, perceptions of information credibility, and verification behavior. Computers & Education, 41(3), pp.271–290.