Responsible AI
1.
[Fig 2A, Fig 2B]
a. Model generating image Fig 1A
b. Model generating image Fig 1B
c. Insufficient Information
d. Parameters have no importance
2.
[Fig 3]
What language-based task is being performed in Fig 2?
a. Common-sense QA
b. Planning and Strategic Thinking
c. Paraphrasing
d. Text Generation
3.
[Fig 4]
Which of the following tasks by AI surpasses the human
benchmark according to Fig 4?
a. Language Understanding
b. Code Generation
c. Neither
d. Both
4.
[Fig 5A: Task A; Fig 5B: Task B]
[Fig 6A, Fig 6B]
Is there any issue with the model’s response in Fig 5A and Fig
5B?
a. The model is biased about career
b. The model is honest
c. The model has no issue
d. The model is biased towards gender
6.
[Figure panels: A, B, C, D]
Identify the source, destination, mask, and output for a deepfake
generation.
a. A: Source, B: Destination, C: Mask, D: Output
b. A: Destination, B: Mask, C: Source, D: Output
c. A: Mask, B: Source, C: Destination, D: Output
d. None of the above
7.
[Fig 8]
What issue caused the catastrophe in Fig 6?
a. Bias in Algorithm
b. Variance in Algorithm
c. Error in Algorithm
d. Slowness in Algorithm
8. Which of the following are risks associated with AI?
a. Malicious use
b. AI plateau
c. Organizational risks
d. Self-driving cars
22. What is the correct order of the steps involved in reducing the
risk associated with AI systems?
a. Respond -> Prioritize -> Identify -> Improve
b. Prioritize -> Identify -> Improve -> Respond
c. Respond -> Prioritize -> Identify -> Improve
d. Identify -> Prioritize -> Respond -> Improve
1. A model that is not robust gives unreliable predictions when met with adversaries. Which
of the following are common adversaries in this context?
a. Distribution Shift
b. Overfitting
c. Noisy Data
d. Model Compression
e. Gradient Descent
f. Data Augmentation
3. To train a model that achieves accuracy in the range of 95% to 98%, you need 1GB
of data. To get 100% accuracy, you need 120GB of data. This idea is similar to
which of the following principles:
a. Sigmoid Distribution
b. Power law distribution
c. Uniform distribution
d. Gaussian Distribution
e. Long-tailed distribution
8. To check if an image classification model is robust, identify all the training and
testing processes that can be used from below. The three datasets are ImageNet,
AugMix, Mixup
a. Train on AugMix and test on AugMix
b. Train on AugMix and test on ImageNet
c. Train on ImageNet and test on AugMix
d. Train on Mixup and test on ImageNet
e. Train on ImageNet and test on ImageNet
f. Train on ImageNet and test on Mixup
11. The introduction of new lighting conditions in an image dataset would most likely
cause which of the following?
a. Distribution Shift
b. Concept Shift
c. Model Decay
d. Feature Extraction
12. Identify the goal(s) of a model when training with RLHF:
a. Maximize the penalty
b. Maximize the reward
c. Minimize the penalty
d. Minimize the reward
15. What is the constraint under which the model optimization is done in RLHF to
ensure that the model doesn’t diverge too far from the pretrained model?
a. KL Divergence
b. L2 Regularization
c. Entropy Maximization
d. Gradient Clipping
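(For reference, and not part of the question: one common way the KL-constrained RLHF objective is written, with π_θ the policy being tuned, π_ref the pretrained reference model, r the learned reward, and β a penalty coefficient:
maximize over θ:  E[ r(x, y) ] − β · KL( π_θ(· | x) || π_ref(· | x) ),  with y ~ π_θ(· | x).
The KL term is what keeps the tuned model from drifting too far from the pretrained one.)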
16. What are the issues with reward modelling?
a. Reward shrinking - gradually decreasing rewards over time
b. Reward misalignment - reward signals do not align with the desired outcomes
c. Reward saturation - model stops learning after a certain reward threshold is
reached
d. Reward consistency - ensuring rewards are uniformly distributed
e. Reward hacking - maximising the reward through an imperfect proxy while losing sight of the true goal
17. Direct Preference Optimization (DPO) works in which one of the following ways:
a. RLHF without rewards model
b. RLHF without human feedback
c. RLHF without reinforcement learning
d. RLHF without KL divergence
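(For reference, a sketch of the objective from the original DPO paper, written for a preferred response y_w and a rejected response y_l for prompt x:
L_DPO(θ) = − E[ log σ( β · ( log( π_θ(y_w|x) / π_ref(y_w|x) ) − log( π_θ(y_l|x) / π_ref(y_l|x) ) ) ) ],
so the preference data is fit directly with a classification-style loss, without training a separate reward model or running reinforcement learning.)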
19. Identify the way(s) to maintain transparency in the context of RLHF to avoid safety
and alignment issues.
a. Quality-assurance measures for human feedback
b. Minimize the involvement of humans to reduce biases
c. Use black-box algorithms to simplify the process
d. Avoid documenting the feedback process to save time
e. Limit the diversity of human feedback to ensure consistency
f. Have a powerful loss function when optimizing the reward model
24. What are the different defence mechanism(s) against poisoning attacks?
a. Biasing
b. Filtering
c. Unlearning
d. Representation engineering
e. AutoDebias
UNLEARNING
1. What kind of content or information would you want to remove from the model's data?
a. Biased or discriminatory data
b. Useful patterns and trends
c. General public data
d. Random noise
e. Personally identifiable information
f. Valid and accurate data
3. Identify the steps involved in the exact unlearning as discussed in the course.
a. Isolate the data -> shard the data -> slice the data -> aggregate the data
b. Aggregate the data -> isolate the data -> slice the data -> shard the data
c. Shard the data -> Slice the data -> Isolate the data -> Aggregate the data
d. Shard the data -> Isolate the data -> Slice the data -> Aggregate the data
e. Isolate the data -> slice the data -> shard the data -> aggregate the data
5. How should the original model and the model after the below unlearning methods
behave?
1) exact unlearning
2) approximate unlearning
a. 1) distributionally identical 2) distributionally identical
b. 1) distributionally close 2) distributionally close
c. 1) distributionally identical 2) distributionally close
d. 1) distributionally close 2) distributionally identical
6. How does unlearning via differential privacy work?
a. check whether an adversary can reliably tell apart the models before unlearning
and after unlearning
b. check whether the model can output private and sensitive information before and
after unlearning
c. check whether the model's predictions become more consistent and stable for
private information before and after unlearning.
d. check whether an adversary can identify the differences in the distribution of
output data of the model before and after unlearning
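(One common way this is formalised, as a sketch using the (ε, δ) notation: writing A for the training algorithm, D for the dataset, F for the forget set, and U for the unlearning procedure, unlearning via differential privacy asks that for every output set S,
P( U(A(D), F) ∈ S ) ≤ e^ε · P( A(D \ F) ∈ S ) + δ,
and symmetrically with the two sides swapped, i.e., an adversary cannot reliably distinguish the unlearned model from one retrained without F.)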
10. In the scenario of asking for unlearning, what kind of things can be easily unlearned?
a. Hate speech
b. Toxic content
c. Factual Information
d. Sensitive information
11. When evaluating the quality of unlearning using Membership Inference Attack, which of
the following scenarios implies that the unlearning is successful?
a. The accuracy increases on the forget set
b. The accuracy drops on the forget set
c. The accuracy stays the same on the forget set
d. The accuracy increases on the test set
e. The accuracy drops on the test set
f. The accuracy stays the same on the test set
14. What idea does the paper Corrective Machine Unlearning build upon?
a. Not all poisoned data can be identified for unlearning
b. Identifying and removing a small subset of poisoned data points is sufficient to
ensure the model's integrity
c. Enhancing the model's ability to handle completely new, unseen poisoned data
d. The accuracy of the model improves proportionally with the amount of data
removed, regardless of whether it is poisoned or not
e. Adding redundant data to the dataset to counteract the effects of poisoned data
f. Not all poisoned data can be identified for unlearning
15. Identify all the methods that act as the baseline for the TOFU benchmark dataset
a. Gradient Descent
b. Gradient Ascent
c. Gradient Difference
d. Gradient boosting
e. Gradient Clipping
16. The WMDP benchmark tests on unlearning what kind of information?
a. Biosecurity
b. High-school biology
c. Hate speech on Twitter
d. Crime data
17. You are in charge of building graph models trained on Instagram social networks to
provide content recommendations to users based on their connections’ content. You
realize that a particular user in the network is leading to toxic content recommendations.
What kind of unlearning would you use in this scenario to prevent the recommendation
of toxic content?
a. Node feature unlearning
b. Node unlearning
c. Edge Unlearning
d. Subgraph unlearning
11. Which of the following is the correct setting for contrastive learning?
a. Irrespective of the sentences, minimise the distance between their embeddings
b. Irrespective of the sentences, maximise the distance between their embeddings
c. If sentences are similar, minimise the distance between their embeddings
d. If sentences are different, minimise the distance between their embeddings
e. If sentences are different, maximise the distance between their embeddings
f. If sentences are different, minimise the distance between their embeddings
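(To make the setting concrete, below is a minimal illustrative sketch, not course code, of a margin-based contrastive loss over a pair of sentence embeddings; emb_a, emb_b and label are placeholder inputs.)

import numpy as np

def contrastive_loss(emb_a, emb_b, label, margin=1.0):
    # label = 1 for a similar sentence pair, 0 for a dissimilar pair
    dist = np.linalg.norm(emb_a - emb_b)          # distance between the two embeddings
    if label == 1:
        return dist ** 2                          # similar pair: minimise the distance
    return max(0.0, margin - dist) ** 2           # dissimilar pair: push apart, up to the margin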
BIAS - II : Assignment 5
4. Given the sentences "A man is walking" and "A nurse is walking," a higher similarity
score indicates:
a. More bias
b. Less bias
c. No bias
d. Ambiguous bias
5. Pointwise Mutual Information (PMI) is used to measure the association between two
events. Which of the following statements best describes PMI?
a. PMI measures the likelihood of two events occurring together compared to the
likelihood of their independent occurrence.
b. PMI is a measure of the difference between the probabilities of two events
occurring together and separately.
c. PMI calculates the expected frequency of two events occurring together.
d. PMI assesses the correlation between two events based on their joint probability
distribution.
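(For reference, the standard definition: PMI(x, y) = log[ P(x, y) / ( P(x) · P(y) ) ]. It is zero when x and y are independent, positive when they co-occur more often than independence would predict, and negative when they co-occur less often.)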
6. Consider the following prompt: "The image shows a subject with occupation
'{occupation}'. What is the gender of the central subject in this image?"
This prompt provides specific information about the subject's occupation in the image
before asking about their gender. Based on this description, how would you classify the
prompt?
a. Informed Task
b. Blind Task
c. Follow-up Task
d. Generic Task
9. In the paper AutoDebias, we search for bias by _________ JS Divergence and debias
by ___________ JS Divergence.
a. Maximizing, Minimizing
b. Minimizing, Maximizing
c. Maximizing, Not Changing
d. Not Changing, Minimizing
2. Between a randomized response (with epsilon>0) and a fair coin toss response, which
algorithm would you use to preserve privacy but have a better utility?
a. Randomized response because the chance of falsehood is 50%
b. Randomized response because the chance of truth is greater than 50%
c. Randomized response because the chance of falsehood is greater than 50%
d. Coin toss response because the chance of falsehood is 50%
e. Coin toss response because the chance of truth is greater than 50%
f. Coin toss response because the chance of falsehood is greater than 50%
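(As a reference point, assuming the standard two-sided randomized response rather than any lecture-specific variant: with parameter ε > 0 the respondent reports the truth with probability p = e^ε / (1 + e^ε), which is strictly greater than 1/2 for every ε > 0, whereas a fair coin toss response reports the truth with probability exactly 1/2. For example, ε = 1 gives p ≈ 0.73.)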
3. Consider the equation in the context of privacy guarantees (The notations used are the
same as used during the lecture).
P(RR(x') = b) · e^(−ε) ≤ P(RR(x) = b) ≤ P(RR(x') = b) · e^ε
To maximize the privacy gains, which of the following values should be changed and
how?
a. ε should be maximum for privacy, ε should be minimum for utility
b. ε should be minimum for privacy, ε should be minimum for utility
c. ε should be maximum for privacy, ε should be maximum for utility
d. ε should be minimum for privacy, ε should be maximum for utility
e. ε is unrelated
4. Consider the below values:
X = {x1, x2, ..., xN} is the truth of an experiment
Y = {y1, y2, ..., yN} is the set of values revealed instead of the truth
To estimate the average of the truth, Y cannot be used directly as an estimator because of the
random process by which it was obtained. You derive new values Z = {z1, z2, ..., zN} from Y
which are better estimators of X. How do you arrive at the values Z?
a. Removing the bias from Y introduced through the random process
b. Adding the bias to Y removed through the random process
c. Removing the variance from Y introduced through the random process
d. Adding the variance to Y removed through the random process
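(A sketch of one standard construction, assuming the two-sided randomized response above in which each yi equals xi with probability p and 1 − xi otherwise: since E[yi] = (2p − 1)·xi + (1 − p), the values
zi = ( yi − (1 − p) ) / ( 2p − 1 )
satisfy E[zi] = xi, so their average is an unbiased estimator of the average of X.)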
5. If ε is fixed, given a privacy guarantee, to improve the utility, which of the following values
can be modified?
a. Increase the number of experiments
b. Increase the amount of randomness
c. Increase the amount of bias introduced in the random process
d. Increase the amount of variance introduced in the random process
6. Identify the equation for an ε-differentially private mechanism (The notations used are the same
as used during the lecture):
a. P(M(x) ∈ S) / P(M(x') ∈ S) ≤ e^ε
b. P(M(x') ∈ S) / P(M(x') ∈ S) ≤ e^ε
c. P(M(x) ∈ S) / P(M(x) ∈ S) ≤ e^ε
d. P(M(x) ∈ S) / P(M(x') ∈ S) ≥ e^ε
e. P(M(x') ∈ S) / P(M(x') ∈ S) ≥ e^ε
f. P(M(x) ∈ S) / P(M(x) ∈ S) ≥ e^ε
7. Identify the correct scenario in the case of differential privacy
a. Trust the curator; Trust the world
b. Do not trust the curator; Trust the world
c. Trust the curator; Do not trust the world
d. Do not trust the curator; Do not trust the world
8. Identify all the values representing sensitivity in a laplacian mechanism where the
function under consideration is an average of n binary values {0,1} (The notations used
are the same as used during the lecture).
a. 1/n
b. (1/n) · |xn' − xn|
c. ε
d. ε/n
e. n^(−1)
f. Δ/ε
9. Identify the distribution from which the noise is derived in a laplacian mechanism. The
representation is of the form Laplacian(a,b) where a is the mean and b is the spread
parameter. (The notations used are the same as used during the lecture)
a. laplacian(1, Δ/ε)
b. laplacian(Δ/ε, 0)
c. laplacian(Δ/ε, 1)
d. laplacian(0, Δ/ε)
e. laplacian(1, 1)
f. laplacian(0, 0)
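(A minimal illustrative sketch of how a laplacian mechanism could release a private average; the numbers below are made up, and numpy's Laplace sampler is used with mean 0 and scale Δ/ε.)

import numpy as np

n = 1000
true_avg = 0.42            # average of n binary values in {0, 1}
sensitivity = 1.0 / n      # Δ for an average of n binary values
epsilon = 0.5

noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
noisy_avg = true_avg + noise
print(noisy_avg)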
10. Higher privacy guarantees can be achieved in which of the following scenarios? Identify
all the possible scenarios.
a. Epsilon should be high
b. Inverse Sensitivity should be high
c. Variance should be high
d. Noise should be high
e. Utility should be high
11. Identify the deviation of the value from the truth in the scenario of a laplacian
mechanism. (The notations used are the same as used during the lecture).
a. o(1/(εn))
b. o(n'/(εn))
c. o(ε/n)
d. o(e/(εn))
e. o(εn)
12. In the scenario of a privacy-utility trade-off, for fixed privacy, by what factor does the
number of samples required for a particular utility differ between the Laplacian
mechanism and randomized response?
a. Constant factor
b. Linear factor
c. Exponential factor
d. Logarithmic factor
e. Quadratic factor
Assignment week 7
1. When calculating the sensitivity in ε-Differential Privacy, where the value derived from
the data points is a d-dimensional vector, identify the normalisation technique.
(Notations are the same as used in the lecture)
a. Manhattan normalisation
b. Euclidean normalisation
c. Max normalisation
d. Min-max normalisation
e. Sigmoid normalisation
2. In (ε, δ)-Differential Privacy, what does δ = 0 imply? (Notations are the same as used in
the lecture)
a. The equation P(M(x) ∈ S) ≤ e^ε · P(M(x') ∈ S) should hold for some of the subsets S
b. The equation P(M(x) ∈ S) ≤ e^ε · P(M(x') ∈ S) should hold for most of the subsets S
c. The equation P(M(x) ∈ S) ≤ e^ε · P(M(x') ∈ S) should hold for all of the subsets S
d. The equation P(M(x) ∈ S) ≤ e^ε · P(M(x') ∈ S) should hold for none of the subsets S
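(For reference, the relaxed guarantee is usually written as: for all subsets S, P(M(x) ∈ S) ≤ e^ε · P(M(x') ∈ S) + δ. Setting δ = 0 removes the additive slack and recovers pure ε-differential privacy.)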
3. How do the utilities vary in the Laplacian mechanism vs the Gaussian mechanism in a
higher dimension differential privacy setting?
a. As the dimension increases, the Gaussian mechanism requires quadratically
more amount of noise than the Laplacian mechanism, decreasing the utility
b. As the dimension increases, the Gaussian mechanism requires quadratically
lesser amount of noise than the Laplacian mechanism, decreasing the utility
c. As the dimension increases, the Gaussian mechanism requires quadratically
lesser amount of noise than the Laplacian mechanism, increasing the utility
d. As the dimension increases, the Gaussian mechanism requires quadratically
more amount of noise than the Laplacian mechanism, increasing the utility
4. The _____ property ensures that privacy-protected data _____ its privacy guarantee after a
function is applied to it.
a. i. Post-processing ii. Retains
b. i. Post-processing ii. Loses
c. i. Composition ii. Retains
d. i. Composition ii. Loses
5. After using k mechanisms for getting k (ε, δ)- differentially private data variations for a
dataset, the combined leakage that is observed from these k mechanisms can be
minimized by:
a. Using Laplacian Mechanism
b. Using Gaussian Mechanism
c. Using Uniform Mechanism
d. Using Exponential Mechanism
6. In a buyer-seller problem, given n buyers and n valuations by the buyers, what is the
total revenue given a price p?
a. p · Σ_{i=1..n} Ai, where Ai = 1 if vi ≥ p and Ai = 0 if vi < p
b. p · Σ_{i=1..n} Ai, where Ai = 0 if vi ≥ p and Ai = 1 if vi < p
c. 𝑝𝑛
d. 𝑝(𝑛 − 1)
e. 𝑝(1/𝑛)
7. In the exponential mechanism used to select the price that maximizes the revenue, identify the
correct statement for the scenario where two unequal prices result in the same revenue:
a. Both prices have an unequal probability of being selected
b. Both prices have an equal probability of being selected
c. A higher price has a higher probability of being chosen due to normalisation
d. A lower price has a higher probability of being chosen due to normalisation
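(For context, in the standard form of the exponential mechanism a candidate price p with utility u(p), here the revenue it would generate, is selected with probability proportional to exp( ε · u(p) / (2 · Δu) ), where Δu is the sensitivity of the utility; after normalisation, candidates with equal utility therefore receive equal probability.)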
11. In an ideal situation where the models are completely fair, the different parity values are:
a. Approach 0
b. 1
c. Approach 1
d. 0
1) Which of the following best describes the purpose of pixel attribution methods in image
classification by neural networks?
a) To increase the resolution of an image by modifying pixel values.
b) To highlight the pixels that were most relevant for the neural network's decision
in classifying an image.
c) To reduce the noise in an image by adjusting irrelevant pixels.
d) To segment the image into different regions based on pixel similarity.
2) Which of the following is NOT a name commonly associated with pixel attribution
methods?
a) Saliency map
b) Sensitivity map
c) Feature attribution
d) Convolution map
3) Which of the following statements is true regarding pixel attribution methods in image
classification?
a) SHAP and LIME are gradient-based methods that compute the gradient of the
prediction with respect to input features.
b) Gradient-based methods generate explanations by manipulating parts of the
image to see how it affects the classification.
c) Occlusion-based methods manipulate parts of the image, such as blocking or
altering pixels, to understand their influence on the model's decision.
d) All pixel attribution methods require model-specific adjustments to function
correctly.
5) What is the primary purpose of adding noise to the image in the Smooth Grad method?
a) To enhance the resolution of the image.
b) To create multiple variations for averaging pixel attribution maps.
c) To reduce the effect of irrelevant classes.
d) To increase the complexity of the gradient computation.
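(A minimal illustrative sketch of the SmoothGrad idea, assuming a PyTorch classifier 'model' and an input tensor 'image' of shape (1, C, H, W); both names are placeholders, not course code.)

import torch

def smooth_grad(model, image, target_class, n_samples=25, sigma=0.1):
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        # add Gaussian noise to the input and track gradients for the noisy copy
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy)[0, target_class]    # score of the class being explained
        score.backward()
        grads += noisy.grad
    return grads / n_samples                     # averaged gradient map = smoothed saliency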
6) How does Guided BackProp differ from standard backpropagation in generating saliency
maps?
a) It only considers positive gradients by zeroing out negative activations and
gradients.
b) It back propagates gradients with all activations zeroed out.
c) It focuses on highlighting both negative and positive contributions.
d) It requires padding 1 to the image before backpropagation.
7) What does a lack of change in saliency maps after randomizing the layers indicate?
a) The saliency maps are highly accurate in reflecting the model's learning.
b) The saliency maps cannot be deceptive.
c) The saliency maps are unreliable and may not accurately capture the model’s
learned features.
d) The saliency maps provide detailed visualizations of the model's internal
mechanisms
9) What is the primary basis of SHAP (SHapley Additive exPlanations) for generating
explanations?
a) It employs a game theoretic approach to allocate credit and explain predictions.
b) It uses a neural network to generate explanations based on model weights.
c) It applies statistical sampling methods to estimate the importance of features.
d) It utilizes clustering techniques to group similar data points for explanation.
10) How do ProtoPNet models determine which patches are most important for
classification?
a) By evaluating the overall texture patterns of images.
b) By using statistical correlation between different patches of images.
c) By identifying and using patches that are representative or prototypical of each
class.
d) By performing dimensionality reduction on the image data to find key features.
11) Why is probing important even when a model shows strong performance on a task?
a) To check if the model is using irrelevant data for making predictions.
b) To verify if the model's high accuracy is due to performing specific subtasks
effectively.
c) To understand whether the model is overfitting to the training data.
d) To determine the computational efficiency of the model during training and
inference.
2. Which of the following statements is true regarding the development and implementation
of AI systems?
a. Policy considerations and technical details are equally important
b. Technical details alone are sufficient for effective AI development
c. Policy considerations are only important in a few countries
d. AI systems do not require any policy or ethical considerations
3. Based on the lecture content: Which of the following statements about AGI is
appropriate?
a. Learning new skills on its own and having emotional intelligence can be
characteristics of AGI
b. AGI cannot learn new skills independently and lacks emotional intelligence
c. AGI is limited to performing specific tasks and does not require emotional
intelligence
d. AGI only focuses on technical problem-solving without any consideration of
emotional aspects
4. Based on the lecture content: From a mathematical perspective, which of the following is
not considered a major problem in AI today, compared to the others?
a. Explainability
b. Hallucinations
c. Data privacy
d. Bias
5. Which of the following statements best reflects the significance of information rights?
a. The right to inclusion of information and the right to exclusion of information are
both important
b. The right to exclusion of information is more important
c. The right to inclusion of information is irrelevant compared to other rights
d. The right to inclusion is important, but not the right to exclusion
6. Which of the following statements is true regarding checking for copyrighted information
in black-box models?
a. There is no way to check whether models have copyrighted information in
black-box models
b. All black-box models provide transparency for verifying copyrighted information
c. Copyright information can be easily extracted from black-box models
d. Black-box models disclose their training data for copyright verification
8. What role does the government play in the regulation and deployment of large language
models (LLMs)?
a. Strict regulation is provided by the government, which issues licenses to deploy
or use LLMs
b. The government does not regulate LLMs and leaves all oversight to private
companies
c. The government only provides financial support for LLM development without any
regulatory role
d. The government encourages unrestricted use of LLMs without any form of
licensing
9. Which of the following is a notable drawback of AI?
a. Environmental effects, such as high water and energy consumption
b. Less accuracy in predictions and results
c. Usage of transformers in models
d. Consuming more training time
11. Which of the following are reasons for the delays in AI regulation?
a. Lack of domain expertise
b. Challenge of regulating bad without compromising good
c. Lack of funding
d. Political pressures
e. Overabundance of regulations already in place
Week 10
1. Which of the following are challenges faced by AI in the current era? (Select all that
apply)
a. AI systems being biased
b. AI models requiring zero human intervention
c. AI systems being completely unbiased
d. AI models being trained on harmful data
e. AI systems always making ethical decisions
f. AI systems using transformer models
3. Which of the following best describes one of the possible definitions of AGI?
a. AI systems that are limited to specific tasks
b. AI models trained for basic automation
c. AI systems surpassing human intelligence.
d. AI systems which show high training accuracy
4. Which of the following are potential challenges associated with AGI? (Select all that
apply)
a. Chaotic power struggles
b. Utilization for selfish and short-term objectives
c. Guaranteed long-term global stability
d. Universal agreement on AGI’s ethical use
e. Potential misuse by a few for personal gain
f. Complete elimination of bias in decision-making
7. What are the primary focuses of the EU AI Act? (Select all that apply)
a. Regulating the use of AI to ensure safety, transparency, and accountability
b. Banning all forms of AI development in Europe
c. Promoting the unregulated use of AI across all sectors
d. Requiring all AI systems to be open-source and publicly accessible
e. Banning systems with cognitive behavioral manipulation of people or specific
vulnerable groups
10. What is one challenge even after training a model on unbiased data?
a. Implicit bias may still exist in the model
b. No challenge; the model will be completely free of any bias
c. The model will exhibit perfect performance in all scenarios
d. The model will have issues related to data processing
11. Among the following, which type of biased system is considered particularly harmful?
a. Decision systems
b. Recommendation systems
c. Entertainment systems
d. Weather forecasting systems
Assignment 11
1. In the context of the paper SaGE, what is semantic consistency?
a. Semantically equivalent questions should yield semantically equivalent answers
b. Semantically equivalent questions should yield same answers
c. Same questions should yield same answers
d. Same questions should yield semantically equivalent answers
3. What metric was used to determine the quality of the paraphrase of the questions in the
SaGE paper?
a. BERTScore
b. Parascore
c. Jaccard Similarity
d. Cosine Similarity
5. Identify the statements that are TRUE with respect to the current LLMs.
a. LLMs are not consistent in their generation
b. A good accuracy on benchmark datasets correlates with high consistency
c. LLMs are consistent in their generation
d. A good accuracy on benchmark datasets does not correlate with high
consistency
9. As discussed in the lecture, when you have a domain-specific task, what kind of finetuning
is preferred? Identify all the correct methods.
a. Full-model finetuning
b. Layer-specific finetuning
c. Head-level finetuning
d. Retraining
2. Which of the following tasks do Graph Neural Networks (GNNs) typically struggle with?
a. Node classification
b. Link prediction
c. Cycle detection
d. Graph clustering
4. What does the acronym FORGE stand for in the context of graph learning?
a. Framework for Higher-Optimized Representation in Graph Environments
b. Framework for Higher-Order Representations in Graph Explanations
c. Functional Optimization for Regular Graph Embeddings
d. Fast Operational Research for Graph Equations
6. Based on the lecture content: What can the boundary relation be loosely translated to in
graph theory?
a. Nodes
b. Edges
c. Faces
d. Weights