Computer Science (9618)
Topic 13 of 17 · Cambridge A Levels

Computational Thinking & Algorithms

Problem decomposition, abstraction, sorting and searching algorithms, complexity

Introduction to Computational Thinking


Welcome, SeekhoAsaan learners, to a crucial topic in Computer Science: Computational Thinking! This isn't just about coding; it's about thinking like a computer scientist to solve problems, even those that don't involve computers directly. Imagine you want to build a house, plan a cricket match, or manage a busy market. Computational thinking gives you the tools to approach these challenges systematically and efficiently.


At its core, Computational Thinking is a set of problem-solving techniques that involve four key pillars:

  1. Decomposition: Breaking down a complex problem into smaller, more manageable parts.
  2. Pattern Recognition: Finding similarities, trends, or common characteristics in problems or data.
  3. Abstraction: Focusing on the important details and ignoring the irrelevant ones to create a general model.
  4. Algorithms: Developing step-by-step instructions or rules to solve a problem.

Today, we'll dive deep into decomposition, abstraction, and the fascinating world of algorithms, including how we measure their efficiency using complexity.


Problem Decomposition


Imagine you're asked to prepare for Eid-ul-Fitr celebrations at home. This is a huge task! If you try to do everything at once, you'll feel overwhelmed. This is where Problem Decomposition comes in. Decomposition is the process of breaking down a large, complex problem into smaller, more manageable sub-problems. Each sub-problem can then be solved independently, and once all sub-problems are solved, their solutions can be combined to solve the original big problem.


Why is Decomposition Important?

* Simplifies Complexity: Makes daunting tasks seem less intimidating.

* Increases Manageability: Easier to assign parts to different people or focus on one part at a time.

* Easier Debugging: If something goes wrong, you can pinpoint the exact sub-problem causing the issue.

* Reusability: Solutions to sub-problems might be useful for other problems too.


How to Apply Decomposition:

Think about a "divide and conquer" strategy. Keep breaking down problems until each sub-problem is simple enough to be solved directly.


Worked Example: Planning a Cricket Tournament in Lahore


Let's say the Pakistan Cricket Board (PCB) wants to organize a major inter-city cricket tournament in Lahore, involving 16 teams. This is a massive undertaking!


Original Problem: Organize the 'SeekhoAsaan Premier League' cricket tournament.


Decomposition Steps:


  1. Major Sub-problems:

* Tournament Structure & Rules

* Venue Management

* Team & Player Registration

* Scheduling Matches

* Logistics (Transport, Accommodation)

* Marketing & Ticketing

* Financial Management

* Broadcasting & Media


  2. Further Decomposition (e.g., Venue Management):

* Book Gaddafi Stadium and practice grounds.

* Ensure pitch preparation.

* Arrange security personnel.

* Set up medical facilities.

* Manage spectator seating and amenities.


  3. Further Decomposition (e.g., Scheduling Matches):

* Determine match format (e.g., round-robin + knockouts).

* Create fixture list.

* Assign umpires and match referees.

* Manage rain delays and reserve days.


By breaking it down, each team within PCB can handle a specific set of tasks, making the entire tournament manageable.


Abstraction


After breaking down a problem, you often find yourself dealing with many details. Some are crucial, others are not. Abstraction is the process of filtering out or ignoring the unnecessary details to focus on the essential information. It allows us to create a general representation of a system or problem, making it easier to understand and work with.


Think about a map. A map is an abstraction of a real geographical area. It shows you roads, cities, and landmarks, but it deliberately leaves out details like individual trees, specific buildings, or the texture of the asphalt. These details are irrelevant for navigation.


Why is Abstraction Important?

* Manages Complexity: By simplifying reality, it makes complex systems easier to reason about.

* Focuses on the Core: Helps you concentrate on what truly matters for solving the problem.

* Creates General Models: A good abstraction can be applied to many similar situations.

* Improves Communication: Easier to discuss high-level ideas without getting bogged down in specifics.


Levels of Abstraction:

* High-level Abstraction: Presents a general overview, hiding most details (e.g., the concept of a 'car').

* Low-level Abstraction: Provides more specific details, but still abstracts away some underlying complexity (e.g., the engine of a car, without worrying about individual electrons).


Worked Example: Designing a WAPDA Electricity Billing System


Imagine WAPDA wants to create a new software system to calculate and issue electricity bills for its customers across Pakistan.


Initial thought (too many details): "We need to get meter readings, check wiring, see if the customer lives in DHA or Gulshan-e-Iqbal, then calculate taxes, apply discounts, print the bill on specific paper, and send it by post or email, and handle complaints, and..."


Applying Abstraction:


  1. Identify the Core Functionality (High-level Abstraction):

* Customer Account Management

* Usage Data Collection

* Bill Calculation

* Bill Generation & Delivery

* Payment Processing


  2. Abstracting "Customer Account": Instead of worrying about *every single* detail of a customer's life, we abstract a customer as an object with essential attributes:

* `CustomerID`

* `Name`

* `Address`

* `MeterNumber`

* `TariffType` (e.g., residential, commercial, industrial)

* `PreviousReading`

* `CurrentReading`


Details like their favourite colour, profession (unless it impacts tariff), or family size are irrelevant to billing and are abstracted away.


  3. Abstracting "Bill Calculation": This becomes a function that takes `CurrentReading`, `PreviousReading`, and `TariffType` as inputs. The internal complexity of calculating units consumed, applying unit rates, taxes, and fuel price adjustments (FPA) is hidden within this function. The system only cares about the final `BillAmount` output.

By using abstraction, the developers can focus on building modular components for each abstracted part, making the system manageable and robust, regardless of whether a customer is in Karachi or Gilgit.
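These abstractions can be sketched in a few lines of Python. The attribute names, class layout, and per-unit rates below are illustrative assumptions of ours, not real WAPDA tariffs or software:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    # Only the attributes essential to billing survive the abstraction.
    customer_id: int
    name: str
    meter_number: str
    tariff_type: str        # e.g. "residential" or "commercial"
    previous_reading: int
    current_reading: int

# Hypothetical per-unit rates, for illustration only.
RATE_PER_UNIT = {"residential": 25.0, "commercial": 40.0}

def calculate_bill(c: Customer) -> float:
    """All internal detail (units consumed, rates) is hidden behind this function."""
    units = c.current_reading - c.previous_reading
    return units * RATE_PER_UNIT[c.tariff_type]

bill = calculate_bill(Customer(1, "Ayesha", "M-1001", "residential", 1200, 1350))
print(bill)  # 150 units at Rs 25 each = 3750.0
```

Callers see only inputs and a `BillAmount`-style output, which is exactly the point of the abstraction.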


Algorithms


An algorithm is a precise, unambiguous, step-by-step set of instructions to solve a problem or accomplish a specific task. Think of it as a recipe for a computer. Just like a recipe tells you exactly how to make biryani, an algorithm tells a computer exactly how to perform a task.


Key Characteristics of a Good Algorithm:

* Input: It must accept zero or more inputs.

* Output: It must produce at least one output.

* Definiteness: Each step must be clear and unambiguous.

* Finiteness: It must terminate after a finite number of steps.

* Effectiveness: Each step must be basic enough to be carried out exactly, by hand or by machine, in a finite amount of time.


Algorithms are often represented using pseudocode (a high-level, language-independent description) or flowcharts (diagrams using standard symbols).


Sorting Algorithms


Sorting is the process of arranging a collection of items (like numbers, names, or objects) in a specific order, such as ascending or descending. This is incredibly useful for making data easier to find and analyze.


#### Bubble Sort


Bubble Sort is one of the simplest sorting algorithms. It works by repeatedly stepping through the list, comparing adjacent elements, and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.


How it Works:

  1. Start at the beginning of the list.
  2. Compare the first two elements. If the first is greater than the second (for ascending order), swap them.
  3. Move to the next pair of elements and repeat step 2.
  4. Continue this process until you reach the end of the list. After the first pass, the largest (or smallest) element will have "bubbled" to its correct position at the end.
  5. Repeat the entire process for the remaining unsorted portion of the list, excluding the element(s) already in place.
  6. Continue until no swaps are made in a complete pass, meaning the list is sorted.

Example Trace (Ascending): `[5, 1, 4, 2, 8]`

* Pass 1:

* (5, 1) -> `[1, 5, 4, 2, 8]` (Swap)

* (5, 4) -> `[1, 4, 5, 2, 8]` (Swap)

* (5, 2) -> `[1, 4, 2, 5, 8]` (Swap)

* (5, 8) -> `[1, 4, 2, 5, 8]` (No swap) -> `8` is in place.

* Pass 2:

* (1, 4) -> `[1, 4, 2, 5, 8]` (No swap)

* (4, 2) -> `[1, 2, 4, 5, 8]` (Swap)

* (4, 5) -> `[1, 2, 4, 5, 8]` (No swap) -> `5` is in place.

* Pass 3:

* (1, 2) -> `[1, 2, 4, 5, 8]` (No swap)

* (2, 4) -> `[1, 2, 4, 5, 8]` (No swap) -> `4` is in place.

* Pass 4: No swaps needed. The list is sorted: `[1, 2, 4, 5, 8]`


Pseudocode:

```pseudocode
PROCEDURE BubbleSort(list A)
    n = length of A
    REPEAT
        swapped = FALSE
        FOR i FROM 0 TO n-2
            IF A[i] > A[i+1] THEN
                Swap A[i] and A[i+1]
                swapped = TRUE
            END IF
        END FOR
        n = n - 1  // The last element is now sorted
    UNTIL swapped = FALSE
END PROCEDURE
```
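The pseudocode translates almost line for line into a runnable Python sketch (the function name `bubble_sort` is ours):

```python
def bubble_sort(a):
    """Sort list a in place, ascending, using bubble sort."""
    n = len(a)
    swapped = True
    while swapped:              # REPEAT ... UNTIL swapped = FALSE
        swapped = False
        for i in range(n - 1):  # FOR i FROM 0 TO n-2
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        n -= 1                  # the largest unsorted element is now in place
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```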


Complexity:

* Time Complexity (Worst/Average Case): `O(n^2)`. This occurs when the list is in reverse order or elements are far from their sorted positions. For `n` elements there can be up to `n-1` passes, each making up to `n-1` comparisons.

* Time Complexity (Best Case): `O(n)`. This occurs if the list is already sorted. The algorithm will only need one pass to confirm no swaps are needed.

* Space Complexity: `O(1)`. It sorts in place, requiring only a constant amount of extra memory for temporary variables.


#### Insertion Sort


Insertion Sort builds the final sorted array (or list) one item at a time. It iterates through the input elements and at each iteration, it removes one element from the input, finds the location it belongs within the sorted list, and inserts it there. It's like sorting a hand of playing cards.


How it Works:

  1. Consider the first element to be sorted (a list of 1 element is always sorted).
  2. Take the next element from the unsorted part.
  3. Compare this element with elements in the sorted part, moving from right to left.
  4. Shift elements in the sorted part that are greater than the current element one position to the right.
  5. Insert the current element into the correct position.
  6. Repeat steps 2-5 until all elements are sorted.

Example Trace (Ascending): `[5, 1, 4, 2, 8]`

* `[5 | 1, 4, 2, 8]` (5 is sorted)

* Take `1`. Compare `1` with `5`. `1 < 5`. Shift `5` right. Insert `1`.

`[1, 5 | 4, 2, 8]`

* Take `4`. Compare `4` with `5`. `4 < 5`. Shift `5` right. Compare `4` with `1`. `4 > 1`. Insert `4`.

`[1, 4, 5 | 2, 8]`

* Take `2`. Compare `2` with `5`. `2 < 5`. Shift `5` right. Compare `2` with `4`. `2 < 4`. Shift `4` right. Compare `2` with `1`. `2 > 1`. Insert `2`.

`[1, 2, 4, 5 | 8]`

* Take `8`. Compare `8` with `5`. `8 > 5`. Insert `8`.

`[1, 2, 4, 5, 8 | ]` (Sorted)


Pseudocode:

```pseudocode
PROCEDURE InsertionSort(list A)
    n = length of A
    FOR i FROM 1 TO n-1  // Start from the second element
        currentValue = A[i]
        position = i
        WHILE position > 0 AND A[position-1] > currentValue
            A[position] = A[position-1]  // Shift element to the right
            position = position - 1
        END WHILE
        A[position] = currentValue  // Insert currentValue into correct place
    END FOR
END PROCEDURE
```
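The same logic as a runnable Python sketch (function name ours):

```python
def insertion_sort(a):
    """Sort list a in place, ascending, using insertion sort."""
    for i in range(1, len(a)):               # start from the second element
        current = a[i]
        pos = i
        while pos > 0 and a[pos - 1] > current:
            a[pos] = a[pos - 1]              # shift the larger element right
            pos -= 1
        a[pos] = current                     # insert into its correct place
    return a

print(insertion_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```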


Complexity:

* Time Complexity (Worst/Average Case): `O(n^2)`. This happens when the array is in reverse order. For each element, you might have to shift all previous elements.

* Time Complexity (Best Case): `O(n)`. This occurs when the array is already sorted. Each element is compared only once and no shifts are needed.

* Space Complexity: `O(1)`. It sorts in place.


#### Merge Sort


Merge Sort is a highly efficient, comparison-based sorting algorithm based on the divide and conquer paradigm. It works by recursively dividing the unsorted list into `n` sublists, each containing one element (a list of one element is considered sorted), and then repeatedly merging sublists to produce new sorted sublists until there is only one sorted list remaining.


How it Works (Divide and Conquer):

  1. Divide: Recursively break down the list into two halves until you have sublists of one element each.
  2. Conquer (Merge): Repeatedly merge these sublists to produce new sorted sublists. The merging process involves taking two sorted sublists and combining them into a single sorted list.

Example Trace (Ascending): `[38, 27, 43, 3, 9, 82, 10]`


* Divide:

* `[38, 27, 43, 3]`, `[9, 82, 10]`

* `[38, 27]`, `[43, 3]`, `[9, 82]`, `[10]`

* `[38]`, `[27]`, `[43]`, `[3]`, `[9]`, `[82]`, `[10]` (Base cases: single elements are sorted)


* Conquer (Merge):

* Merge `[38]` and `[27]` -> `[27, 38]`

* Merge `[43]` and `[3]` -> `[3, 43]`

* Merge `[9]` and `[82]` -> `[9, 82]`

* `[10]` has no partner at this level, so it carries forward unchanged.


* Merge `[27, 38]` and `[3, 43]` -> `[3, 27, 38, 43]`

* Merge `[9, 82]` and `[10]` -> `[9, 10, 82]`


* Merge `[3, 27, 38, 43]` and `[9, 10, 82]` -> `[3, 9, 10, 27, 38, 43, 82]` (Sorted)


Pseudocode:

```pseudocode
PROCEDURE MergeSort(list A)
    n = length of A
    IF n <= 1 THEN  // Base case: already sorted or empty
        RETURN A
    END IF

    mid = n DIV 2  // Integer division
    left_half = A[0...mid-1]
    right_half = A[mid...n-1]

    sorted_left = MergeSort(left_half)
    sorted_right = MergeSort(right_half)

    RETURN Merge(sorted_left, sorted_right)
END PROCEDURE

PROCEDURE Merge(list L1, list L2)
    result = empty list
    i = 0, j = 0

    WHILE i < length of L1 AND j < length of L2
        IF L1[i] <= L2[j] THEN
            Add L1[i] to result
            i = i + 1
        ELSE
            Add L2[j] to result
            j = j + 1
        END IF
    END WHILE

    // Add remaining elements (if any)
    WHILE i < length of L1
        Add L1[i] to result
        i = i + 1
    END WHILE
    WHILE j < length of L2
        Add L2[j] to result
        j = j + 1
    END WHILE

    RETURN result
END PROCEDURE
```
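A compact runnable Python version of the divide-and-conquer scheme (function names ours; this version returns a new list rather than sorting in place):

```python
def merge(l1, l2):
    """Combine two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(l1) and j < len(l2):
        if l1[i] <= l2[j]:
            result.append(l1[i])
            i += 1
        else:
            result.append(l2[j])
            j += 1
    result.extend(l1[i:])  # add any remaining elements
    result.extend(l2[j:])
    return result

def merge_sort(a):
    """Recursively split, then merge, returning a new sorted list."""
    if len(a) <= 1:        # base case: already sorted or empty
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```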


Complexity:

* Time Complexity (Worst/Average/Best Case): `O(n log n)`. The `log n` comes from repeatedly dividing the list in half. The `n` comes from merging, as each element is processed once per merge level. This makes it much more efficient for large datasets than `O(n^2)` algorithms.

* Space Complexity: `O(n)`. Merge Sort typically requires an auxiliary array for merging, which can be up to the size of the original array.


#### Comparison of Sorting Algorithms


| Algorithm      | Best Case Time | Average Case Time | Worst Case Time | Space Complexity | Notes                                       |
| :------------- | :------------- | :---------------- | :-------------- | :--------------- | :------------------------------------------ |
| Bubble Sort    | `O(n)`         | `O(n^2)`          | `O(n^2)`        | `O(1)`           | Simple, but inefficient for large lists.    |
| Insertion Sort | `O(n)`         | `O(n^2)`          | `O(n^2)`        | `O(1)`           | Efficient for small or nearly sorted lists. |
| Merge Sort     | `O(n log n)`   | `O(n log n)`      | `O(n log n)`    | `O(n)`           | Generally efficient and stable.             |


Searching Algorithms


Searching is the process of finding a specific item (or items) within a collection of items. This is a fundamental operation in computer science, from finding a file on your hard drive to looking up a contact in your phone.


#### Linear Search (Sequential Search)


Linear Search is the simplest searching algorithm. It sequentially checks each element of the list until a match is found or the end of the list is reached.


How it Works:

  1. Start from the first element of the list.
  2. Compare the current element with the target value (the item you're looking for).
  3. If they match, the search is successful, and you return the position/index of the element.
  4. If they don't match, move to the next element and repeat step 2.
  5. If you reach the end of the list and no match is found, the target value is not in the list.

Example Trace: List `[10, 50, 30, 70, 80, 20, 60, 90]`, Target `30`

* Compare `10` with `30` -> No match.

* Compare `50` with `30` -> No match.

* Compare `30` with `30` -> Match found at index 2. Return 2.


Pseudocode:

```pseudocode
PROCEDURE LinearSearch(list A, target_value)
    n = length of A
    FOR i FROM 0 TO n-1
        IF A[i] = target_value THEN
            RETURN i  // Return index where target is found
        END IF
    END FOR
    RETURN -1  // Target not found
END PROCEDURE
```
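As a runnable Python sketch (function name ours):

```python
def linear_search(a, target):
    """Return the index of target in a, or -1 if it is not present."""
    for i in range(len(a)):
        if a[i] == target:
            return i
    return -1

print(linear_search([10, 50, 30, 70, 80, 20, 60, 90], 30))  # 2
```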


Complexity:

* Time Complexity (Worst Case): `O(n)`. This occurs when the target is the last element or not in the list at all. You have to check every element.

* Time Complexity (Average Case): `O(n)`. On average the target sits about halfway along the list, requiring roughly `n/2` comparisons, which is still `O(n)`.

* Time Complexity (Best Case): `O(1)`. This occurs if the target is the first element.

* Space Complexity: `O(1)`. It only uses a constant amount of extra memory.


#### Binary Search


Binary Search is a highly efficient searching algorithm, but it has a crucial prerequisite: the list must be sorted.


How it Works:

  1. Find the middle element of the sorted list.
  2. Compare the middle element with the target value.
  3. If they match, the search is successful.
  4. If the target value is smaller than the middle element, discard the right half of the list and repeat the process on the left half.
  5. If the target value is larger than the middle element, discard the left half of the list and repeat the process on the right half.
  6. Continue until the target is found or the sublist becomes empty (meaning the target is not present).

This repeatedly halves the search interval, making it very fast for large datasets.


Worked Example: Finding a Student in a Sorted Class List (Karachi School)


Imagine a school in Karachi has a sorted list of student IDs for a class of 100 students: `[101, 105, 112, ..., 500]`. We want to find student ID `300`.


Let's use a smaller example: `[10, 20, 30, 40, 50, 60, 70, 80, 90]` (Indices 0-8). Target: `70`


* Step 1: `low = 0`, `high = 8`. `mid = (0 + 8) / 2 = 4`. Element at `mid` is `50`.

`70 > 50`, so we search in the right half. `low` becomes `mid + 1 = 5`.

New search range: `[60, 70, 80, 90]` (Indices 5-8).


* Step 2: `low = 5`, `high = 8`. `mid = (5 + 8) / 2 = 6` (integer division). Element at `mid` is `70`.

`70 == 70`. Match found at index `6`. Return `6`.


Pseudocode:

```pseudocode
PROCEDURE BinarySearch(list A, target_value)
    low = 0
    high = length of A - 1

    WHILE low <= high
        mid = (low + high) DIV 2  // Integer division

        IF A[mid] = target_value THEN
            RETURN mid
        ELSE IF A[mid] < target_value THEN
            low = mid + 1  // Search in the right half
        ELSE  // A[mid] > target_value
            high = mid - 1  // Search in the left half
        END IF
    END WHILE

    RETURN -1  // Target not found
END PROCEDURE
```
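The iterative version in runnable Python, following the worked example above (function name ours; the input list must already be sorted):

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if not present."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2   # integer division, as in the pseudocode
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            low = mid + 1         # discard the left half
        else:
            high = mid - 1        # discard the right half
    return -1

print(binary_search([10, 20, 30, 40, 50, 60, 70, 80, 90], 70))  # 6
```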


Complexity:

* Time Complexity (Worst/Average/Best Case): `O(log n)`. Because the search space is halved in each step, the number of operations grows logarithmically with the size of the list. This is extremely efficient for large lists.

* Space Complexity: `O(1)` for iterative implementation, `O(log n)` for recursive implementation due to call stack. We focus on the iterative version for `O(1)` here.


#### Comparison of Searching Algorithms


| Algorithm     | Prerequisite        | Best Case Time | Average Case Time | Worst Case Time | Space Complexity | Notes                                                  |
| :------------ | :------------------ | :------------- | :---------------- | :-------------- | :--------------- | :----------------------------------------------------- |
| Linear Search | None                | `O(1)`         | `O(n)`            | `O(n)`          | `O(1)`           | Simple; works on unsorted lists; slow for large lists. |
| Binary Search | List must be sorted | `O(1)`         | `O(log n)`        | `O(log n)`      | `O(1)`           | Fast on large sorted lists; requires sorting first.    |


Algorithmic Complexity (Big O Notation)


When we talk about an algorithm's complexity, we're measuring how its performance (time or space) scales with the size of the input. We don't measure performance in seconds or megabytes, because these can vary wildly based on the computer's speed, programming language, etc. Instead, we use a theoretical measure known as Big O Notation.


Big O Notation describes the upper bound of an algorithm's growth rate. It tells us how the runtime or space requirements grow in the worst-case scenario as the input size (`n`) gets very large. It ignores constant factors and lower-order terms because they become insignificant for large `n`.


For example, an algorithm that takes `2n + 5` operations is considered `O(n)` because for large `n`, the `+5` and the `2` become less important than `n` itself.


#### Time Complexity


Time Complexity describes the amount of time an algorithm takes to run as a function of the input size (`n`). We count the number of basic operations (comparisons, assignments, arithmetic operations) rather than actual time.


#### Space Complexity


Space Complexity describes the amount of memory an algorithm uses as a function of the input size (`n`). This includes memory for variables, data structures, and the call stack for recursive algorithms.


#### Common Big O Notations and What They Mean:


  1. `O(1)` (Constant Time):

* The algorithm takes the same amount of time/space regardless of the input size. It's the most efficient.

* *Example:* Accessing an element in an array by its index: `A[5]`. Getting the first element of a list.


  2. `O(log n)` (Logarithmic Time):

* The time/space grows very slowly as the input size increases. Typically involves repeatedly halving the input.

* *Example:* Binary Search.


  3. `O(n)` (Linear Time):

* The time/space grows proportionally to the input size. If you double the input, you roughly double the time/space.

* *Example:* Linear Search, iterating through a list once, printing all elements.


  4. `O(n log n)` (Linearithmic Time):

* More efficient than `O(n^2)` but less efficient than `O(n)`. Often seen in efficient sorting algorithms.

* *Example:* Merge Sort, Quick Sort (average case).


  5. `O(n^2)` (Quadratic Time):

* The time/space grows with the square of the input size. If you double the input, the time/space roughly quadruples. Often involves nested loops.

* *Example:* Bubble Sort, Insertion Sort, comparing every item to every other item.


  6. `O(2^n)` (Exponential Time):

* The time/space doubles with each additional element in the input. Extremely inefficient for anything beyond very small `n`. Often seen in brute-force solutions to complex problems.

* *Example:* Finding all possible subsets of a set.
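To make the exponential case concrete, here is a small Python sketch (function name ours) that enumerates all subsets by brute force; the result doubles in size with every extra element:

```python
def subsets(items):
    """Return a list of all subsets of items (2^n of them)."""
    if not items:
        return [[]]                 # the empty set has one subset: itself
    rest = subsets(items[1:])       # subsets without the first element
    # each of those subsets appears twice: without and with the first element
    return rest + [[items[0]] + s for s in rest]

print(len(subsets([1, 2, 3, 4])))  # 2^4 = 16
```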


Importance of Complexity:

Understanding complexity allows us to choose the most efficient algorithm for a given task, especially when dealing with large datasets. An `O(n log n)` algorithm will be dramatically faster than an `O(n^2)` algorithm for large `n`.


For example, sorting 1 million items:

* `O(n)`: 1,000,000 operations

* `O(n log n)`: 1,000,000 * log(1,000,000) ≈ 1,000,000 * 20 = 20,000,000 operations

* `O(n^2)`: 1,000,000 * 1,000,000 = 1,000,000,000,000 operations
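These operation counts can be reproduced in a few lines of Python (using base-2 logarithms, as is conventional for `O(n log n)` estimates):

```python
import math

n = 1_000_000
ops_linear = n                       # O(n)
ops_nlogn = round(n * math.log2(n))  # O(n log n); log2(1,000,000) is about 20
ops_quadratic = n * n                # O(n^2)
print(ops_linear, ops_nlogn, ops_quadratic)
```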


The difference is astronomical! Choosing `O(n log n)` over `O(n^2)` could mean seconds versus days or even years of computation.


Putting it all together: Computational Thinking in Action


When faced with any problem, big or small, you can apply these computational thinking skills:


  1. Decompose: Break the problem down into smaller, manageable pieces.
  2. Abstract: For each piece, identify the essential information and ignore the irrelevant details.
  3. Pattern Recognize: Look for patterns or similar sub-problems you've encountered before.
  4. Algorithm Design: Develop a step-by-step solution (algorithm) for each sub-problem, considering its efficiency (complexity).

This systematic approach is not just for computer scientists; it's a powerful tool for solving problems in any field, whether you're optimizing traffic flow in Karachi, managing inventory in a Lahore bazaar, or planning your future career!


Keep practicing these techniques, and you'll become a master problem-solver, ready for your A-Level exams and beyond.

Key Points to Remember

  1. Computational Thinking involves decomposition, pattern recognition, abstraction, and algorithms.
  2. Problem Decomposition breaks complex problems into smaller, manageable sub-problems.
  3. Abstraction focuses on essential details, ignoring irrelevant ones to create general models.
  4. An Algorithm is a precise, finite, unambiguous set of instructions to solve a problem.
  5. Sorting algorithms (Bubble, Insertion, Merge) arrange items; Merge Sort (`O(n log n)`) is generally more efficient than Bubble/Insertion Sort (`O(n^2)`).
  6. Searching algorithms (Linear, Binary) find items; Binary Search (`O(log n)`) is much faster than Linear Search (`O(n)`) but requires a sorted list.
  7. Algorithmic Complexity (Big O Notation) measures how an algorithm's time or space requirements scale with input size `n` in the worst case.
  8. Common Big O complexities include `O(1)`, `O(log n)`, `O(n)`, `O(n log n)`, and `O(n^2)`.

Pakistan Example

Optimizing a Lahori Bazaar's Inventory

Imagine the bustling Anarkali Bazaar in Lahore, with hundreds of shopkeepers managing thousands of products daily. Applying computational thinking can help optimize inventory: decomposing the market into individual shops, abstracting product details (name, price, stock) while ignoring colour or brand when not essential, and using sorting (by expiry date) and searching (for specific items) algorithms to manage stock efficiently, reduce waste, and improve overall business operations.
