The document consists of a series of questions related to artificial intelligence concepts, including definitions of strong and weak AI, the Turing test, rational agents, and various contributions from fields like philosophy, mathematics, and psychology. It also covers different types of agents, environments, search algorithms, and problem-solving techniques in AI. The questions are structured in a multiple-choice format, aiming to assess knowledge on AI fundamentals and methodologies.
AI MCQs
1. Strong AI refers to:
a) Computers thinking like humans. b) Adding thinking-like features to computers. c) Building faster processors. d) None of the above.
2. Weak AI involves:
a) Building faster processors. b) Adding thinking-like features to computers. c) Computers thinking like humans. d) None of the above.
3. The Turing test is designed to:
a) Test for computer speed. b) Test for intelligent behavior. c) Test for language understanding. d) None of the above.
4. How can we determine how humans think?
a) Reading books, watching videos, attending lectures. b) Introspection, psychological experiments, brain imaging. c) Talking to experts, conducting surveys, analyzing data. d) None of the above.
5. The main obstacle to the "laws of thought" approach is:
a) Informal knowledge. b) Lack of data. c) Slow computers. d) None of the above.
6. A rational agent is:
a) Entity that only perceives. b) Entity that perceives and acts. c) Entity that only acts. d) None of the above.
7. The rational-agent approach is advantageous because it is:
a) More general than "laws of thought". b) Faster than other approaches. c) Easier to implement. d) None of the above.
8. Philosophy contributes to AI by providing:
a) Logic and methods of reasoning. b) Building faster computers. c) Developing new languages. d) None of the above.
9. Mathematics contributes to AI through:
a) Formal representation and proof algorithms. b) Building faster computers. c) Developing new languages. d) None of the above.
10. Economics contributes to AI with:
a) Building faster computers. b) Utility and decision theory. c) Developing new languages. d) None of the above.
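Several of the questions above hinge on the definition of a rational agent as an entity that perceives and acts. A minimal sketch of that idea, using the two-square vacuum-cleaner world that appears in later questions (the function and action names here are illustrative, not a fixed standard):

```python
# Minimal reflex agent for the two-square vacuum world (illustrative sketch).
# A percept is a (location, status) pair; the agent function maps each
# percept to an action.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"               # clean the current square
    return "Right" if location == "A" else "Left"  # otherwise move on

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note that this agent consults only the current percept, which is exactly what distinguishes a simple reflex agent in the questions below.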
11. Neuroscience's main contribution to AI is:
a) Physical substrate for mental activity. b) Building faster computers. c) Developing new languages. d) None of the above.
12. Psychology contributes to AI by studying:
a) Building faster computers. b) Developing new languages. c) Phenomena of perception and motor control. d) None of the above.
13. Computer engineering's contribution to AI is:
a) Developing new languages. b) Creating new algorithms. c) Building fast computers. d) None of the above.
14. Control theory contributes to AI by:
a) Building faster computers. b) Developing new languages. c) Designing systems that maximize an objective function over time. d) None of the above.
15. Linguistics contributes to AI with:
a) Building faster computers. b) Developing new languages. c) Knowledge representation and grammar. d) None of the above.
16. The Boolean circuit model of the brain was developed by:
a) Turing. b) Samuel. c) McCulloch and Pitts. d) None of the above.
17. "Computing Machinery and Intelligence" was written by:
a) McCulloch. b) Samuel. c) Turing. d) None of the above.
18. Early AI programs in the 1950s focused on:
a) Building faster computers. b) Developing new languages. c) Checkers, geometry, logic. d) None of the above.
19. The Dartmouth meeting in 1956 was significant for:
a) Development of new algorithms. b) Building faster computers. c) Adoption of the name "Artificial Intelligence". d) None of the above.
20. Neural network research in 1966–74:
a) Almost disappeared. b) Became very popular. c) Led to new discoveries. d) None of the above.
21. AI research in 1969–79 focused on:
a) Building faster computers. b) Developing new languages. c) Knowledge-based systems. d) None of the above.
22. Terry Winograd's Shrdlu dialogue system was significant for:
a) Building faster computers. b) Developing new languages. c) Early development of natural language understanding. d) None of the above.
23. The expert systems industry in 1988–93 experienced:
a) Industry busts. b) Industry booms. c) New discoveries. d) None of the above.
24. AI research in 1985–95 focused on:
a) Building faster computers. b) Developing new languages. c) Neural networks. d) None of the above.
25. IBM Deep Blue in 1997 was significant for:
a) Developing new algorithms. b) Building faster computers. c) Beating the World Chess Champion. d) None of the above.
26. IBM Watson in 2011 was significant for:
a) Developing new algorithms. b) Building faster computers. c) Winning Jeopardy. d) None of the above.
27. In the context of artificial intelligence, an agent is best defined as:
a) An entity that acts upon sensors. b) An entity that perceives its environment. c) An entity that ignores its environment. d) None of the above.
28. In AI, the term "percept" refers to:
a) The sensory inputs received by an agent. b) The actions performed by an agent. c) The environment in which the agent operates. d) None of the above.
29. An agent's percept sequence refers to:
a) The sequence of future actions it intends to take. b) The complete history of its sensory inputs. c) Its current internal state. d) None of the above.
30. An agent's behavior in AI is formally described by its:
a) Environment. b) Agent function. c) Sensors. d) None of the above.
31. The agent program in AI can be best described as:
a) The set of sensors used by the agent. b) The concrete implementation of an agent function. c) The environment in which the agent operates. d) None of the above.
32. In the classic vacuum-cleaner world example, the agent perceives:
a) The color of the squares in the environment. b) The location of the agent and the presence/absence of dirt. c) The temperature of the squares in the environment. d) None of the above.
33. A rational agent in AI is defined as one that:
a) Ignores its environment. b) Does the right thing. c) Acts randomly. d) None of the above.
34. In AI, the performance measure is used to:
a) Evaluate the agent's sensors. b) Evaluate the success of the agent's behavior in achieving its goals. c) Evaluate the agent's actions. d) None of the above.
35. Rationality in AI depends on which of the following factors?
a) The agent's sensors only. b) The agent's performance measure, knowledge, available actions, and percepts. c) The agent's actions only. d) None of the above.
36. In the context of AI, omniscience refers to:
a) Knowing everything about the environment and its future. b) Acting randomly. c) Ignoring the environment. d) None of the above.
37. A rational agent in AI should ideally be capable of:
a) Ignorance. b) Autonomy and learning. c) Dependence on its designer. d) None of the above.
38. The task environment in AI is typically characterized by:
a) Sensors only. b) PEAS (Performance measure, Environment, Actuators, Sensors). c) Actions only. d) None of the above.
39. A fully observable environment in AI is one where:
a) The agent has no sensors. b) The agent has complete access to the current state of the environment. c) The agent ignores the environment. d) None of the above.
40. A single-agent environment in AI is characterized by:
a) An agent solving a problem without interaction from other agents. b) Multiple agents competing with each other. c) Multiple agents cooperating with each other. d) None of the above.
41. A deterministic environment in AI is one where:
a) The agent's actions have no effect on the environment. b) The next state of the environment is completely determined by the current state and the agent's action. c) The environment changes randomly. d) None of the above.
42. An episodic task environment in AI is characterized by:
a) The agent's experience being divided into discrete episodes, with no impact of past actions on future episodes. b) The agent's actions having a significant impact on future states. c) The environment being continuous. d) None of the above.
43. A static environment in AI is one where:
a) The environment does not change while the agent is deliberating. b) The environment changes randomly and unpredictably. c) The agent ignores changes in the environment. d) None of the above.
44. A discrete environment in AI is one where:
a) The environment is divided into a set of discrete states or locations. b) The environment is continuous. c) The agent ignores the environment. d) None of the above.
45. A known environment in AI is one where:
a) The outcomes of all actions are given to the agent. b) The agent has no knowledge about the environment. c) The environment changes randomly. d) None of the above.
46. The structure of an agent in AI can be generally described as:
a) Sensors + environment. b) Actions + percepts. c) Architecture + program. d) None of the above.
47. Simple reflex agents select actions based on:
a) The current percept only. b) The entire percept history. c) Their goals. d) None of the above.
48. Model-based reflex agents maintain:
a) An internal state that represents the world. b) No internal state. c) Only the current percept. d) None of the above.
49. Goal-based agents require:
a) Information about their goals. b) No goals. c) Only the current percept. d) None of the above.
50. Utility-based agents are designed to address:
a) Environments with partial observability and stochasticity. b) Deterministic environments. c) Fully observable environments. d) None of the above.
51. Learning agents operate in:
a) Unknown environments. b) Known environments. c) Static environments. d) None of the above.
52. The performance element of a learning agent is primarily responsible for:
a) Selecting actions. b) Making improvements to the agent's knowledge. c) Providing feedback to the agent. d) None of the above.
53. The learning element of a learning agent is primarily responsible for:
a) Making improvements to the agent's knowledge. b) Selecting actions. c) Ignoring feedback. d) None of the above.
54. The critic in a learning agent is responsible for:
a) Providing feedback on the agent's performance. b) Selecting actions. c) Making improvements to the agent's knowledge. d) None of the above.
55. The role of the problem generator in a learning agent is to:
a) Suggest exploratory actions to improve the agent's knowledge. b) Provide feedback to the agent. c) Select actions for the agent to take. d) None of the above.
56. A vacuum-cleaner world agent is considered rational when it:
a) Maximizes its performance measure over time. b) Acts randomly. c) Ignores its environment. d) None of the above.
57. Which of the following best describes a problem-solving agent?
a) Reacts directly to percepts. b) Finds sequences of actions to achieve goals. c) Learns from experience. d) Operates in fully observable environments.
58. What is the primary difference between a goal-based agent and a simple reflex agent?
a) Goal-based agents have sensors. b) Goal-based agents have actuators. c) Goal-based agents have internal states representing goals. d) Goal-based agents operate in deterministic environments.
59. What is a state space in the context of problem-solving?
a) The set of all possible actions. b) The set of all possible world configurations. c) The set of all possible goals. d) The set of all possible sensors.
60. What is a search algorithm in AI?
a) A method for finding the fastest computer. b) A systematic way to explore the state space. c) A technique for learning from data. d) A method for representing knowledge.
61. Which of the following is NOT a common search strategy?
a) Breadth-first search. b) Depth-first search. c) Linear regression. d) A* search.
62. What is the primary advantage of breadth-first search?
a) It always finds the shortest path. b) It has low memory requirements. c) It is guaranteed to find a solution if one exists. d) It explores all nodes at a given depth before moving to the next depth.
63. What is the primary disadvantage of depth-first search?
a) It can get stuck in infinite loops. b) It requires a lot of memory. c) It always finds the longest path. d) It is computationally expensive.
64. What is a heuristic function in search algorithms?
a) A measure of the cost of an action. b) An estimate of the distance to the goal. c) A way to represent the problem state. d) A method for exploring the search space.
65. Which search algorithm uses a heuristic function to guide the search?
a) Breadth-first search. b) Depth-first search. c) A* search. d) Uniform-cost search.
66. What is the key difference between informed and uninformed search algorithms?
a) Informed search algorithms use domain-specific knowledge. b) Uninformed search algorithms are faster. c) Informed search algorithms always find the optimal solution. d) Uninformed search algorithms use heuristics.
67. What is a minimax algorithm used for?
a) Playing single-player games. b) Playing two-player, zero-sum games. c) Solving constraint satisfaction problems. d) Learning from data.
68. What is the concept of alpha-beta pruning?
a) A method for improving the efficiency of minimax search. b) A way to guarantee finding the optimal solution. c) A technique for handling uncertainty. d) A method for representing game states.
69. What are Monte Carlo tree search methods used for?
a) Solving linear programming problems. b) Playing games with large branching factors. c) Representing knowledge in a declarative form. d) Learning from reinforcement learning.
70. What is a constraint satisfaction problem (CSP)?
a) A problem that involves finding the shortest path between two points. b) A problem that involves finding a set of values that satisfy a set of constraints. c) A problem that involves learning from data. d) A problem that involves controlling a robot.
71. What is a constraint in the context of CSPs?
a) A variable that must be assigned a value. b) A restriction on the values that variables can take. c) A solution to the problem. d) A search algorithm.
72. What is backtracking search used for in CSPs?
a) Finding the initial state. b) Finding a solution by systematically trying different values. c) Evaluating the quality of a solution. d) Learning from past experiences.
73. What are the four key components of a task environment for an AI agent?
a) Sensors, actuators, goals, and performance measure. b) Performance measure, environment, actuators, sensors. c) Knowledge base, inference engine, sensors, and actuators. d) State space, actions, goals, and performance measure.
74. What is a fully observable environment?
a) An environment where the agent has access to all relevant information. b) An environment where the agent has no sensors. c) An environment that changes randomly. d) An environment where the agent's actions have no effect.
75. What is a deterministic environment?
a) An environment where the next state is completely determined by the current state and the agent's action. b) An environment where the next state is random. c) An environment where the agent has no control. d) An environment where the agent's actions have no effect.
76. What is an episodic environment?
a) An environment where the agent's experience is divided into discrete episodes. b) An environment where the agent's actions have a long-term impact. c) An environment that changes continuously. d) An environment where the agent has no memory.
77. What is a simple reflex agent?
a) An agent that maintains an internal state. b) An agent that learns from experience. c) An agent that selects actions based on the current percept only. d) An agent that plans its actions in advance.
78. What is a model-based reflex agent?
a) An agent that acts based on its current goals. b) An agent that maintains an internal model of the world. c) An agent that learns from every action it takes. d) An agent that operates in a fully observable environment.
79. What is a goal-based agent?
a) An agent that acts based on its current percepts only. b) An agent that acts based on its current state and the desired goal state. c) An agent that learns from every action it takes. d) An agent that operates in a deterministic environment.
80. What is a utility-based agent?
a) An agent that selects actions based on a fixed set of rules. b) An agent that selects actions based on a fixed set of goals. c) An agent that selects actions based on a measure of how well they achieve its goals. d) An agent that learns from every action it takes.
81. What is a learning agent?
a) An agent that operates in a fully observable environment. b) An agent that improves its performance over time. c) An agent that has no memory. d) An agent that acts based on a fixed set of rules.
82. What is the minimax algorithm used for?
a) Solving constraint satisfaction problems. b) Finding the optimal move in two-player, zero-sum games. c) Learning from data. d) Representing knowledge in a declarative form.
83. What is alpha-beta pruning?
a) A method for improving the efficiency of minimax search. b) A way to guarantee finding the optimal solution in all games. c) A technique for handling uncertainty in game playing. d) A method for representing game states.
84. What are Monte Carlo tree search methods?
a) A class of algorithms that use random sampling to explore the game tree. b) A set of techniques for playing games with perfect information. c) A method for representing game states using graphs. d) A way to guarantee finding the optimal solution in all games.