Establishing trust in automated reasoning


Abstract

Since its beginnings in the 1940s, automated reasoning by computers has become a tool of ever-growing importance in scientific research. So far, the rules underlying automated reasoning have mainly been formulated by humans, in the form of program source code. Rules derived from large amounts of data, via machine learning techniques, are a complementary approach currently under intense development. The question of why we should trust these systems, and the results obtained with their help, has been discussed by philosophers of science but has so far received little attention from practitioners. The present work focuses on independent reviewing, an important source of trust in science, and identifies the characteristics of automated reasoning systems that affect their reviewability. It also discusses possible steps towards increasing reviewability and trustworthiness via a combination of technical and social measures.

Link to resource: https://doi.org/10.31222/osf.io/nt96q

Type of resource: Reading

Education level(s): College / Upper Division (Undergraduates)

Primary user(s): Student, Teacher, Researcher

Subject area(s): Social Science

Language(s): English