Wikipedia:Reference desk/Archives/Science/2020 March 13

Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 13

Psychological Research on Rules and Domain Expertise

I'm developing a presentation for a workshop on artificial intelligence; it's an overview of knowledge representation in AI. One of the points I want to make is that rule-based systems were adopted early on partly because there is empirical evidence that domain experts often represent their expertise in rule-like forms. This is something I know I've heard several times and think I've read, but it's been a long time since I worked on expert systems and I can't remember where I read it (or whether it was just something people often said even though there isn't good research to back it up). Any and all pointers would be appreciated. --MadScientistX11 (talk) 15:03, 13 March 2020 (UTC)

See the article Knowledge representation and reasoning and a tutorial about knowledge representation. DroneB (talk) 15:40, 13 March 2020 (UTC)
Thanks. Ironically, I wrote a great deal of the KR article but I forgot there was a good reference from that Schank article. That's what I needed. --MadScientistX11 (talk) 20:00, 20 March 2020 (UTC)
That goes way back to the early books on AI, like the one by Nilsson. The tricky part is what to do when multiple rules that say conflicting things all apply to the same situation. Deep learning seems to be the first thing pointing at a satisfactory answer. 2601:648:8202:96B0:54D9:2ABB:1EDB:CEE3 (talk) 17:30, 13 March 2020 (UTC)
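To make the conflict concrete, here is a minimal Python sketch (illustrative only; the facts, rules, and strategy are hypothetical and not taken from Nilsson or any particular production system) of two rules matching the same facts, so the interpreter has to pick one using a conflict-resolution strategy such as rule order or specificity:

    # Minimal production-system sketch (illustrative only): two rules match the
    # same facts, so the interpreter needs a conflict-resolution strategy.
    facts = {"fever", "rash"}

    # Each rule: (name, set of conditions, conclusion) -- all hypothetical.
    rules = [
        ("R1", {"fever"}, "suspect common cold"),
        ("R2", {"fever", "rash"}, "suspect measles"),
    ]

    # Conflict set: every rule whose conditions are all present in the facts.
    conflict_set = [r for r in rules if r[1] <= facts]
    print("conflict set:", [name for name, _, _ in conflict_set])

    # One classic strategy: prefer the most specific rule (most conditions matched).
    name, conditions, conclusion = max(conflict_set, key=lambda r: len(r[1]))
    print(f"fired {name}: {conclusion}")

"Most specific rule wins" is only one of several strategies (rule order, recency, salience); which one is appropriate depends on the system.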
There is no question that machine learning is very powerful and is driving great advances in AI right now, but I don't agree that deep learning is relevant to my question. One of the big issues with deep learning (and most other approaches to machine learning) is that the knowledge is not explicitly represented in a way that is intuitive to a human, the way it is in rules; that is why explanation is a major open question for ML. With a set of symbolic rules it is easy to look at a rule trace and say things like "the system concluded Diagnosis X because of the patient's fever, age, and white blood count"; with ML the knowledge is buried in the layers of a neural net or in parameters fitted by algorithms such as gradient descent. Usually even the developers don't know exactly how the resulting artificial neural network or model maps to the training data. While there are certainly things that humans conclude via something like neural nets (e.g., face recognition, phoneme recognition, and generally finding a signal in noisy data), machine learning isn't the same kind of explicit knowledge representation as an ontology or rule base, and it is not a model (at least I know of no research that supports this) for the way domain experts solve problems. --MadScientistX11 (talk) 20:00, 20 March 2020 (UTC)
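For concreteness, here is a minimal Python sketch of the kind of rule trace described above (the rules, thresholds, and "Diagnosis X" are hypothetical and not taken from MYCIN or any real system): every derived fact is recorded along with the rule that produced it, so the final conclusion can be explained in plain terms.

    # Toy forward-chaining diagnoser (illustrative only): the trace records which
    # rule fired and what it derived, so the conclusion can be explained.
    patient = {"temperature": 39.5, "age": 72, "wbc": 14000}  # hypothetical data

    # Hypothetical rules: (name, test over the patient data, derived fact)
    rules = [
        ("R1", lambda p: p["temperature"] > 38.0, "fever"),
        ("R2", lambda p: p["wbc"] > 11000, "elevated white blood count"),
        ("R3", lambda p: p["age"] > 65, "elderly patient"),
    ]

    derived, trace = set(), []
    for name, test, fact in rules:
        if test(patient):
            derived.add(fact)
            trace.append((name, fact))

    conclusion = "no conclusion"
    if {"fever", "elevated white blood count"} <= derived:
        conclusion = "Diagnosis X"
        trace.append(("R4", conclusion))

    print(conclusion)
    print("because:", "; ".join(f"{name} -> {fact}" for name, fact in trace))

A trained network that made the same prediction would instead encode the relevant regularities in its weights, which is exactly the explanation gap described above.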