C950 Data Structures and Algorithms II
Rated 4.8/5 from over 1,000 reviews
- Unlimited Exact Practice Test Questions
- Trusted By 200 Million Students and Professors
What’s Included:
- Unlock 100+ actual exam questions and answers for C950 Data Structures and Algorithms II on a monthly basis
- Well-structured questions covering all topics, accompanied by organized images.
- Learn from mistakes with detailed answer explanations.
- Easy-to-understand explanations for all students.

Free C950 Data Structures and Algorithms II Questions
A scheme to organize data of varied types so that data can be efficiently entered, stored, sorted, retrieved, and manipulated by a computer program is known as a
- Data structure
- Looping control structure
- Decision control structure
- Sequential control structure
Correct Answer
A. Data structure
Explanation
A data structure is a scheme used to organize, store, and manipulate data efficiently. Data structures enable the efficient entry, storage, retrieval, sorting, and manipulation of data in computer programs, depending on the needs of the application.
Why other options are wrong
B. Looping control structure
Looping control structures are used to repeat certain operations in a program but do not focus on organizing or managing data. Examples include for, while, and do-while loops.
C. Decision control structure
Decision control structures are used to make decisions in a program (such as if, else, and switch statements) but do not involve organizing or managing data in the way data structures do.
D. Sequential control structure
A sequential control structure simply refers to the flow of execution from one instruction to the next, without branching or looping. It does not relate to organizing or managing data efficiently.
Explain how greedy algorithms differ from other algorithm design paradigms in terms of decision-making during problem-solving.
- Greedy algorithms consider all possible solutions before making a decision.
- Greedy algorithms make the best local choice at each step without reconsidering previous choices.
- Greedy algorithms always guarantee the optimal solution.
- Greedy algorithms require a complete search of the solution space.
Correct Answer
B. Greedy algorithms make the best local choice at each step without reconsidering previous choices.
Explanation
Greedy algorithms make a series of decisions by choosing the locally optimal solution at each step, aiming to find a global optimum. These decisions are made based on immediate gains or benefits, without revisiting or revising previous choices. The key characteristic is that greedy algorithms do not backtrack or reconsider previous steps. They are often simpler and faster but do not always guarantee an optimal solution.
Why other options are wrong
A. Greedy algorithms consider all possible solutions before making a decision.
This is incorrect because greedy algorithms do not consider all possible solutions upfront. Instead, they make decisions based on immediate local benefits.
C. Greedy algorithms always guarantee the optimal solution.
Greedy algorithms do not always guarantee the optimal solution. In some cases, a greedy approach may lead to suboptimal results, especially when the problem requires a more complex decision-making process.
D. Greedy algorithms require a complete search of the solution space.
Greedy algorithms do not require a complete search of the solution space. In fact, one of their strengths is that they make decisions step by step, without needing to explore all possibilities, which makes them more efficient for certain types of problems.
What is the term used to describe the algorithm design paradigm where a problem is divided into smaller, more manageable subproblems?
- Dynamic programming
- Greedy algorithms
- Divide and conquer
- Backtracking
Correct Answer
C. Divide and conquer
Explanation
Divide and conquer is an algorithmic paradigm that breaks a problem down into smaller subproblems that are easier to solve. Once these subproblems are solved, their solutions are combined to form the solution to the original problem. Classic examples of divide and conquer algorithms include merge sort, quicksort, and binary search.
Why other options are wrong
A. Dynamic programming
Dynamic programming is an approach that solves problems by breaking them down into overlapping subproblems and storing the results to avoid redundant computations. It’s similar to divide and conquer, but it focuses on overlapping subproblems rather than independent ones.
B. Greedy algorithms
Greedy algorithms make a sequence of choices, each of which looks the best at the moment, and they do not involve dividing a problem into smaller subproblems.
D. Backtracking
Backtracking is a technique for finding solutions to problems incrementally, trying out different possibilities and backtracking when a solution path is not valid. It is not based on dividing a problem into subproblems.
Which of the following is a benefit of using abstract data types (ADTs)?
- They reduce the amount of code needed.
- They improve code clarity.
- They eliminate the need for algorithms.
- They require less memory.
Correct Answer
B. They improve code clarity.
Explanation
The use of abstract data types (ADTs) helps to separate the implementation details from the interface, making the code more modular and easier to understand. ADTs define the type of data stored and the operations that can be performed on that data, improving code clarity and reducing complexity. By focusing on the behavior rather than implementation, ADTs provide a more intuitive way to design and manage data structures.
Why other options are wrong
A. They reduce the amount of code needed.
While ADTs can lead to cleaner and more modular code, they do not necessarily reduce the total amount of code required. The abstraction may simplify the design, but it might not result in fewer lines of code overall.
C. They eliminate the need for algorithms.
ADTs do not eliminate the need for algorithms. In fact, algorithms are often designed to work on ADTs. The abstraction of the data structure does not remove the need for logic to manipulate and process the data.
D. They require less memory.
ADTs themselves do not inherently require less memory. The memory usage depends on the specific implementation of the ADT. The focus of ADTs is on abstraction and usability rather than memory optimization.
What is the running time of the heap sort algorithm expressed in terms of big-O asymptotic notation?
- O(n)
- O(n log₂ n)
- O(n²)
Correct Answer
B. O(n log₂ n)
Explanation
Heap sort operates by first building a max-heap (or min-heap) from the input data, which takes O(n) time. Then, the algorithm repeatedly extracts the maximum (or minimum) element from the heap and re-heapifies the remaining elements, which takes O(log n) time for each extraction. Since there are n elements, the overall time complexity is O(n log n).
Why other options are wrong
A. O(n)
This is incorrect because although the heap can be built in O(n) time, the subsequent steps of extracting elements from the heap involve O(log n) operations, making the overall time complexity O(n log n).
C. O(n²)
This is incorrect because heap sort does not have a time complexity of O(n²). That would apply to algorithms like bubble sort or insertion sort, but heap sort runs in O(n log n) time.
The Collection class or interface that allows only unique elements is the ___________ class or interface.
- Set
- List
- Vector
- all of the above
Correct Answer
A. Set
Explanation
A Set is a collection that does not allow duplicate elements. Each element in a Set is unique, and if an attempt is made to add a duplicate element, the collection will not include it. This is in contrast to other collections like List<T> or Vector<T>, where duplicates are allowed.
Why other options are wrong
B. List
A List<T> allows duplicates, meaning that multiple occurrences of the same element can exist in the list.
C. Vector
A Vector<T> is a type of dynamic array that allows duplicates, much like a List<T>. It does not enforce uniqueness of elements.
D. all of the above
This is incorrect because only the Set<T> collection ensures that all elements are unique. Both List<T> and Vector<T> allow duplicates.
Suppose that the sequence is implemented with an array. Which of these operations are likely to have a constant worst-case time?
- addBefore
- countOccurrences
- remove
- None of (A), (B), and (C) have a constant worst-case time
- Two of (A), (B), and (C) have a constant worst-case time
- All of (A), (B), and (C) have a constant worst-case time
Correct Answer
D. None of (A), (B), and (C) have a constant worst-case time
Explanation
When a sequence is implemented using an array, operations like addBefore, countOccurrences, and remove usually involve iterating through the array, which can have linear time complexity. These operations do not generally have a constant worst-case time. For example, remove may require shifting elements in the array, and countOccurrences may require scanning through all the elements.
Why other options are wrong
A. addBefore
This is incorrect because adding an element before another element in an array requires shifting elements, resulting in O(n) time complexity, not constant time.
B. countOccurrences
This is incorrect because counting occurrences of a value involves scanning the entire array, leading to O(n) time complexity, which is not constant.
C. remove
This is incorrect because removing an element from an array requires shifting subsequent elements to fill the gap, which takes O(n) time, not constant time.
E. Two of (A), (B), and (C) have a constant worst-case time
This is incorrect because none of the operations have constant worst-case time.
F. All of (A), (B), and (C) have a constant worst-case time
This is incorrect because none of the operations exhibit constant worst-case time complexity.
In the Divide and Conquer Process, breaking the problems into smaller sub-problems is the responsibility of?
- Divide/Break
- Sorting/Divide
- Conquer/Solve
- Merge/Combine
Correct Answer
A. Divide/Break
Explanation
In the Divide and Conquer algorithm design paradigm, the process of dividing the problem into smaller, manageable sub-problems is referred to as the "Divide" or "Break" step. This step is essential because it reduces the original complex problem into simpler components that can be tackled more efficiently. Once divided, each smaller problem is solved individually, usually recursively.
Why other options are wrong
B. Sorting/Divide
While sorting is often part of divide and conquer algorithms (like in quicksort or mergesort), "Sorting" specifically does not refer to the process of dividing the problem. The division of the problem is separate from sorting.
C. Conquer/Solve
"Conquer" refers to the step where the sub-problems are solved, not the process of dividing them. After the division, each sub-problem is individually solved.
D. Merge/Combine
"Merge" or "Combine" is the step where the solutions to the sub-problems are combined to form the solution to the original problem, but this does not involve breaking the problem into smaller parts.
What is the correct way to find the minimum element in an array b with 'size' elements using a generic algorithm?
- min = fsu::g_min_element(b, b + size_);
- min = fsu::g_max_element(b, b + size_);
- min = fsu::g_find_element(b, b + size_);
- min = fsu::g_sort_element(b, b + size_);
Correct Answer
A. min = fsu::g_min_element(b, b + size_);
Explanation
To find the minimum element in an array, the g_min_element algorithm is used, which efficiently finds the smallest element within the specified range. In this case, b is the beginning of the array and b + size_ is the end of the range (one past the last element). This function returns an iterator to the smallest element in the range.
Why other options are wrong
B. min = fsu::g_max_element(b, b + size_);
This option is incorrect because g_max_element finds the maximum element, not the minimum. The function you're looking for is g_min_element to find the minimum value.
C. min = fsu::g_find_element(b, b + size_);
This is not correct because g_find_element is typically used to find a specific element in a range, not the minimum. It does not provide the functionality to find the minimum value.
D. min = fsu::g_sort_element(b, b + size_);
This option is incorrect because g_sort_element is used for sorting the elements, not for finding the minimum. Sorting the array first would be inefficient for finding the minimum since it takes more time than simply applying g_min_element.
Identify the asymptotic runtime for the following sorting algorithm: g_merge_sort.
- Θ(n)
- Θ(n²)
- Θ(n log n)
- Θ(log n)
Correct Answer
C. Θ(n log n)
Explanation
Merge sort is a classic divide-and-conquer sorting algorithm. It works by recursively dividing the list into halves, sorting each half, and then merging them. The recursion produces about log₂(n) levels, and merging all sublists at each level takes linear time. Therefore, the total runtime is Θ(n log n), which holds for the best, average, and worst cases.
Why other options are wrong
A. Θ(n)
This is incorrect because merge sort always involves dividing the input and merging it back together, even in the best case. There is no optimization in merge sort that allows it to complete in linear time, unlike algorithms like counting sort under specific conditions.
B. Θ(n²)
This is wrong because Θ(n²) is characteristic of less efficient sorting algorithms like bubble sort or insertion sort in their worst-case scenarios. Merge sort avoids this by consistently breaking down the problem and recombining in linear time across log(n) levels.
D. Θ(log n)
This is incorrect as Θ(log n) would suggest a sublinear time complexity, which is impossible for sorting algorithms that must examine each element at least once. Sorting n items in less than linear time contradicts basic computational complexity constraints.
How to Order
Select Your Exam
Click on your desired exam to open its dedicated page with resources like practice questions, flashcards, and study guides. Choose what to focus on; your selected exam is saved for quick access once you log in.
Subscribe
Hit the Subscribe button on the platform. With your subscription, you will enjoy unlimited access to all practice questions and resources for a full 1-month period. After the month has elapsed, you can choose to resubscribe to continue benefiting from our comprehensive exam preparation tools and resources.
Pay and unlock the practice Questions
Once your payment is processed, you’ll immediately unlock access to all practice questions tailored to your selected exam for 1 month.