14 Problem-Solving Patterns for Coding Interviews

By Career Mawa


Whether you are a student, a job seeker, or an aspiring developer, problem-solving patterns are your ticket to acing coding interviews. These patterns help you recognize the underlying logic of a question instead of starting from scratch every time. Here we discuss 14 essential coding patterns, explained simply with examples and guidelines.

14 Problem-Solving Patterns for Coding Interviews

1. Sliding Window Pattern

Introduction

The Sliding Window pattern is a basic algorithm design technique that works best with problems dealing with linear data structures such as arrays and strings. It improves the process of searching for subarrays or substrings that satisfy a given condition by keeping a window that traverses the data structure, minimizing the use of nested loops and hence improving efficiency.

When to Use

This pattern is best suited when:

  1. The problem requires examining contiguous subarrays or substrings of a particular length.
  2. The goal is to find the maximum/minimum sum, average, or some other property of a subarray or substring.
  3. You are working with linear data structures such as arrays or strings.

Types of Sliding Window

  1. Fixed-size Sliding Window: The window size is fixed during traversal.
    Example: Maximum sum of a subarray of size k.
  2. Variable-size Sliding Window: The window size grows or shrinks according to certain conditions.
    Example: Longest substring with at most k unique characters.

How It Works

  1. Overall concept: use two pointers to mark the boundaries of the current window while scanning over the data structure.
  2. Expand the window by advancing the end pointer.
  3. Shrink the window by advancing the start pointer when a condition is violated.

Update the answer whenever the current window satisfies the required condition. Each element is visited at most twice, so the overall time complexity is O(n).

Example Problem

Problem: Given an array of integers and an integer k, compute the maximum sum of any subarray of size k.

Naive Solution: Compute all the possible subarray sums of size k with time complexity O(n*k).

Optimized Solution Using Sliding Window:
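A minimal sketch of this approach in Python (the function name max_subarray_sum and the sample input in the comment are illustrative, not part of the original problem statement):

def max_subarray_sum(nums, k):
    # Sum of the first window of size k
    window_sum = sum(nums[:k])
    max_sum = window_sum
    # Slide the window forward one element at a time
    for i in range(k, len(nums)):
        window_sum += nums[i] - nums[i - k]  # add the new element, drop the one leaving
        max_sum = max(max_sum, window_sum)
    return max_sum

# Example: max_subarray_sum([2, 1, 5, 1, 3, 2], 3) -> 9  (subarray [5, 1, 3])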

Explanation:

  1. Initialize the window sum with the sum of the first k elements.
  2. Slide the window forward one element at a time:
  3. Subtract the element leaving the window.
  4. Add the new element entering the window.
  5. Update the maximum if the current window sum is greater.

This reduces the time complexity to O(n), which is significantly better for large data sets.

Most common Interview Questions

  1. Longest substring without repeating characters.
  2. Maximum sum subarray of size k.
  3. Minimum size subarray sum.
  4. Longest substring with at most k distinct characters.

Tips for Mastery

  1. Practice recognizing problems that have contiguous sequences involved.
  2. Make yourself comfortable with both fixed-size and variable-size window situations.
  3. Use the sliding window technique in other contexts to appreciate how versatile it can be.

2. Two Pointers Pattern

Introduction

The Two Pointers pattern is an effective problem-solving scheme employed to minimize time complexity in algorithms that search or compare elements in arrays or linked lists. The fundamental concept is to employ two pointers (or indices) that move through the data structure in opposite directions or at different speeds.

It is commonly used in sorting, searching, and optimization problems. If properly used, it can substitute nested loops and reduce time complexity from O(n²) to O(n) or O(n log n).

When to Use

You’ll often use the Two Pointers technique when:

  1. The input array or string is sorted or needs to be traversed from both ends.
  2. You are looking for a pair or triplet that satisfies a condition (like sum = target).
  3. You want to reverse, partition, or filter elements in-place.
  4. You need to find palindromes or solve problems related to linked lists (like cycle detection).

Types of Two Pointer Techniques

  1. Opposite Ends: Begin one pointer at the start and the other at the end of the data structure.
    Use case: Finding pairs that add up to a target value in a sorted array.
  2. Same Direction (Fast & Slow): Begin both pointers at the same end and advance them at different rates.
    Use case: Cycle detection in linked lists or removing duplicates.

How It Works

The technique usually involves:

  1. Initializing two pointers: left and right.
  2. Moving the pointers based on specific logic:
    1. If the current condition is met, update the result.
    2. Otherwise, move one of the pointers to progress.

This results in a linear scan where every element is visited just once.

Example Problem

Problem: Given a sorted list of integers, determine if there are two numbers that add up to a target value.

Naive Solution: Use two nested loops and try every pair – O(n²) time.

Improved Solution Using Two Pointers:
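One possible implementation (a minimal sketch; the function name two_sum_sorted is illustrative, and it returns the indices of the matching pair):

def two_sum_sorted(nums, target):
    left, right = 0, len(nums) - 1
    while left < right:
        current = nums[left] + nums[right]
        if current == target:
            return [left, right]   # indices of the pair that adds up to target
        elif current < target:
            left += 1              # sum too low: move the left pointer right
        else:
            right -= 1             # sum too high: move the right pointer left
    return []                      # no pair found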

 

Explanation:

  1. Because the array is sorted, you can compare the sum of two elements.
  2. If the sum is too low, move the left pointer to the right to increase it.
  3. If the sum is too high, move the right pointer to the left to decrease it.
  4. This gives a fast O(n) solution.

Frequent Interview Questions

  1. Delete duplicates from an array that is sorted.
  2. Reverse a string or linked list in-place.
  3. Two Sum in a sorted array.
  4. Container with most water.
  5. Palindrome check.

Linked List Example: Detecting a Cycle

You can also use fast and slow pointers in linked lists:
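A sketch of the idea, assuming a simple singly linked ListNode class with a next attribute (names are illustrative):

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # moves one step
        fast = fast.next.next     # moves two steps
        if slow is fast:          # pointers met: a cycle exists
            return True
    return False                  # fast reached the end: no cycle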

 


This is known as Floyd’s Cycle Detection Algorithm, a classic example of the Two Pointers pattern.

Tips for Mastery

  1. Two pointers are extremely helpful in decreasing time complexity for sorted input.
  2. Understand the pointer movement logic: when to move left and when to move right.
  3. Always consider edge cases: empty input, duplicates, or odd-length arrays.
  4. Do practice problems related to palindrome checking, pair matching, or in-place operations.

3. Fast and Slow Pointers Pattern

Introduction

The Fast and Slow Pointers (also referred to as the “Tortoise and Hare” method) is a beautiful and effective pattern primarily employed in linked list and cycle detection questions. It uses two pointers moving through a data structure at varying speeds—typically, one travels twice as quickly as the other.

This method is critical when you’re dealing with cyclic patterns, middle elements or when brute force checking all elements is not efficient. It enables you to perform operations in O(n) time with O(1) space which is very much sought after in technical interviews.

When to Use

The Fast and Slow Pointers pattern is your best friend when:

  1. You need to find cycles in a linked list.
  2. You must get the middle of a linked list.
  3. You’re working with palindromes in arrays or linked lists.
  4. You’re solving problems where one pointer needs to move faster to catch or meet the other.
  5. You need to identify the starting node of a loop.

How It Works

The strategy uses two pointers:

  • Slow pointer: moves one step at a time.
  • Fast pointer: moves two steps at a time.

By doing so, if there is a loop or some repeated structure, the fast pointer will eventually catch up with the slow pointer and they will meet. If there is no loop, the fast pointer will simply reach the end of the list.

This method reduces unnecessary passes over the data and doesn’t require using extra data structures such as hash sets.

Example 1: Detect Cycle in a Linked List

Problem: For a given head of a linked list, find whether it contains a cycle.

Explanation:

  1. The slow pointer advances one step at a time, the fast pointer two steps.
  2. If there is a cycle, they will meet after some time.
  3. If not, the fast pointer will hit the end (None).

Time Complexity: O(n)
Space Complexity: O(1)

Example 2: Find Middle of Linked List

Problem: Return the middle node of a singly linked list.
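A minimal sketch, assuming a singly linked node object with a next attribute (the function name find_middle is illustrative):

def find_middle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # one step
        fast = fast.next.next     # two steps
    return slow                   # slow sits at the middle when fast hits the end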

 

Explanation:

  1. When the fast pointer hits the end, the slow one is in the middle.
  2. Very efficient: there is no need to compute the length first.

Common Interview Questions

  1. Detect loop in a linked list.
  2. Find the beginning node of the cycle.
  3. Determine whether a linked list is a palindrome.
  4. Find the middle node in a linked list.
  5. Happy Number problem (LeetCode classic).

Why It Works

Let’s understand the logic with a cycle detection:

Suppose there’s a loop, and the fast pointer is 2x faster. Eventually, it will catch up to the slow one inside the cycle, as they’re both looping around infinitely. This guarantees a meeting point if a cycle exists.

Tips for Mastery

  1. Use this when you need real-time detection without extra space.
  2. Make sure you handle edge cases like null pointers.
  3. It’s mostly applicable in linked lists, but also useful in numerical patterns (e.g., “Happy Number”).
  4. In advanced problems, you can extend this to find the length or start point of the cycle after detection.

4. Merge Intervals Pattern

Introduction

The Merge Intervals pattern applies to problems where you’re presented with a list of intervals (typically start and end times), and you need to merge overlapping intervals, insert an interval into the correct position, or compute non-overlapping time. It appears frequently in scheduling, calendar booking, and resource allocation problems.

It’s extremely popular in technical interviews at companies like Google, Microsoft, and Facebook because it tests your ability to sort data, recognize overlapping conditions, and use logic to merge or separate ranges.

When to Use

Use the Merge Intervals pattern when:

  1. You’re provided with ranges of intervals (start and end times, or x/y coordinates).
  2. You’re required to combine overlapping intervals.
  3. You want to determine available slots or conflicts among events.
  4. The problem involves ranges, durations, or timeline overlaps.

These are typically problems concerning real-world situations such as scheduling meeting rooms or task scheduling.

How It Works

The basic strategy is:

  1. Sorting the intervals by the starting point.
  2. Initializing a result list with the first interval.
  3. Looping through the remaining intervals:
    1. If the current interval intersects with the last added one, merge them.
    2. Otherwise, add the current interval to the result.

This is a typical greedy approach — you make local choices (merge or not) at each step to arrive at the global solution.

Example: Merging Overlapping Intervals

Problem: Given an array of intervals, merge all overlapping intervals and return the resulting array.
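One way to implement this (a minimal sketch; merge_intervals is an illustrative name, and intervals are assumed to be given as [start, end] lists):

def merge_intervals(intervals):
    if not intervals:
        return []
    intervals.sort(key=lambda x: x[0])            # sort by start time
    merged = [list(intervals[0])]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:                # overlaps the last merged interval
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])           # no overlap: add as a new interval
    return merged

# merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]]) -> [[1, 6], [8, 10], [15, 18]]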

 


Input: [[1,3],[2,6],[8,10],[15,18]]
Output: [[1,6],[8,10],[15,18]]

Explanation:

  1. Intervals [1,3] and [2,6] intersect, so we combine them to [1,6].
  2. Others don’t overlap, so we add them as is.

Common Interview Questions

  1. Merge overlapping intervals.
  2. Insert a new interval into a sorted list.
  3. Find common free time across employees.
  4. Minimum number of meeting rooms required.
  5. Remove covered intervals.
  6. Interval list intersections.

Why Sort First?

Sorting based on start time guarantees we’re always comparing intervals in the right order. Without sorting, merging intervals would need more complicated logic and higher time complexity.

Time Complexity:

  • Sorting: O(n log n)
  • Merging: O(n)
  • Total: O(n log n)

Space Complexity: O(n) for output list.

Real-Life Analogy

Suppose you're a calendar organizer trying to consolidate overlapping meetings:

  1. You sort the meetings by time and spot the ones that conflict.
  2. You combine them into a single time block. That is precisely what the Merge Intervals pattern accomplishes.

Mastering Tips

  1. Always sort intervals first — it’s the most important step.
  2. Understand the overlap condition: current_start <= last_end.
  3. Master both merge and insert forms of the problem.
  4. Practice edge cases: completely nested intervals, same-start-time intervals, etc.
  5. Imagine the merging process on timelines or on paper — it helps a lot.

5. Sliding Window Pattern

Introduction

The Sliding Window pattern is one of the most powerful techniques in algorithm design, particularly when working with arrays or strings. It helps reduce nested loops and enables you to process chunks of data (a “window”) in linear time. This pattern is widely used to find subarrays or substrings that meet a certain condition — such as maximum sum, longest substring without repeating characters, or the smallest subarray with a target sum.

Instead of recalculating results repeatedly for overlapping subarrays, the sliding window technique reuses previous computations, resulting in significant performance improvements.

When to Use

Use the sliding window pattern when:

  1. You’re given a contiguous block of data (e.g., subarrays or substrings).
  2. You need to optimize a range-based operation (max/min/average/length).
  3. The problem asks for maximum or minimum within a subarray of size k.
  4. You’re scanning through a window that slides over time (like a moving frame).
  5. This is highly common in interview questions involving strings and arrays.

How It Works

There are two main variations:

  1. Fixed-size sliding window
    Example: Get the maximum sum of a subarray of size k.
  2. Dynamic-size sliding window
    Example: Get the length of the longest substring with no repeating characters.

The basic concept is to:

  1. Begin with a “window” at the start of the array/string.
  2. Increase or decrease the window depending on conditions.
  3. Utilize two pointers (often left and right) to indicate the current window.
  4. Store results (e.g., max sum, length, characters) as you move the window ahead.

Example 1:

 

Explanation:

  1. Begin with finding the sum of the first k elements.
  2. Then for every subsequent element, move the window one position forward by appending the new element and removing the element that just exited the window.
  3. This provides O(n) efficiency rather than O(n*k) using nested loops.

Example 2: Longest Substring Without Repeating Characters (Dynamic-size)
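A sketch of the dynamic-size window, assuming we only need the length of the longest substring (names and the sample call are illustrative):

def longest_unique_substring(s):
    seen = {}                             # character -> index of its most recent occurrence
    left = 0
    longest = 0
    for right, ch in enumerate(s):
        if ch in seen and seen[ch] >= left:
            left = seen[ch] + 1           # repeat found: move the left edge past it
        seen[ch] = right
        longest = max(longest, right - left + 1)
    return longest

# longest_unique_substring("abcabcbb") -> 3  ("abc")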

Explanation:

  • Slide the window to the right, and move the left pointer forward whenever you encounter a repeated character.
  • This keeps the window limited to distinct (non-repeating) characters.

Standard Interview Questions

  1. Longest substring with no repeating characters
  2. Maximum sum subarray with size k
  3. Minimum window substring
  4. Smallest subarray that has a sum greater than or equal to some value
  5. Longest substring that contains at most K distinct characters

Why It Works

The Sliding Window strategy saves you from brute-force checking every possible subarray. Instead of re-processing data it has already seen, it simply shifts the window along, updating a few variables in constant time.

Time Complexity: O(n)
Space Complexity: Problem-dependent, typically O(1) or O(n)

Mastering Tips

  1. Practice fixed and dynamic window implementations.
  2. Be careful about when to reduce or expand the window.
  3. Utilize hashmaps or counters for counting characters/elements in strings.
  4. Clearly establish your window boundaries (left and right pointers).
  5. Visualize with examples — it is helpful to sketch the window movement.

6. Two Pointers Pattern

Introduction

The Two Pointers pattern is an extremely common design pattern that proves particularly useful in the case of sorted arrays or linked lists but is also found to be applicable in string-related problems. It involves using two distinct pointers (usually beginning at opposite ends or adjacent positions) to solve some problem in a single pass and hence lowering the time complexity remarkably.

This pattern is most commonly employed in interview questions about searching for pairs, reversing lists, eliminating duplicates, or comparing elements in a systematic manner. It reduces many brute-force solutions from O(n²) to O(n).

When to Apply

You may want to employ the two pointers pattern when:

  1. The input array or string is sorted.
  2. You are tasked with finding pairs that meet a certain requirement (such as two numbers summing to a target).
  3. The problem involves moving from both ends toward the center.
  4. You’re trying to partition data or find palindromes, duplicates or subarrays.
  5. The pattern is especially useful in problems where searching, matching or comparison of elements is involved.

How It Works

There are two main variations:

1. Opposite-direction pointers (start at both ends)

  • Used to find pairs or to reverse data.
  • Example: Pair with target sum in a sorted array.

2. Same-direction pointers (both move forward)

  • Beneficial for scanning and filtering elements.
  • Example: Removing duplicates or finding subarrays.

The main steps:

  1. Initialize two pointers (e.g., left = 0, right = len(arr) - 1).
  2. Move them towards each other depending on some condition.
  3. Compare or update values as required.

Example 1: Two Sum (Sorted Array)

Problem: Find two numbers in a sorted array that add up to a target sum.



Input: [2, 4, 6, 8, 11], target = 10
Output: [2, 8]

Example 2: Reverse a String In-place

This is a classic in-place problem that can be solved cleanly with two pointers converging toward the center.
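A minimal sketch, assuming the string is given as a list of characters so it can be modified in place (Python strings themselves are immutable):

def reverse_in_place(chars):
    # chars is a list of characters; the reversal happens in place
    left, right = 0, len(chars) - 1
    while left < right:
        chars[left], chars[right] = chars[right], chars[left]
        left += 1
        right -= 1
    return chars

# reverse_in_place(list("hello")) -> ['o', 'l', 'l', 'e', 'h']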

Common Interview Questions

  1. Two sum in a sorted array
  2. Remove duplicates in-place
  3. Container with the most water
  4. Move zeros to the end
  5. Reverse a linked list or string
  6. Palindrome check

Why It Works

The two pointers method enables you to iterate over the array in linear time by eliminating unnecessary comparisons. Rather than attempting every possible combination (which would be O(n²)), you make a better choice at each step.

Time Complexity: O(n)
Space Complexity: O(1) (most problems employing this pattern are solved in-place)

Real-Life Analogy

Suppose you and a friend stand at opposite ends of a bookshelf and try to find two books whose costs sum up to your budget. You both pick a book and adjust according to whether the sum is too large or too small — that's the two-pointer technique in practice.

Master Tips

  1. Always check whether the input is sorted — that's a major hint to employ two pointers.
  2. Choose if your pointers should go in the same direction or opposite directions.
  3. Practice both in-place operations and search algorithms with two pointers.
  4. Fuse with other patterns such as sliding window or binary search for combined problems.

7. Fast and Slow Pointers Pattern (Tortoise and Hare Method)

Introduction

The Fast and Slow Pointers design, or Tortoise and Hare algorithm, is applied regularly in problems about linked lists or cyclic arrays. The concept is straightforward but mighty — utilize two pointers moving with varying speeds (usually one moving one step while the other is moving two steps) to spot patterns, crossings, or loops in a structure.

This approach is extremely efficient in finding solutions to problems like cycle detection, middle node identification and intersection point detection in arrays or linked lists.

When to Use

Use the fast and slow pointers pattern when:

  1. You want to find cycles in a linked list or an array.
  2. You’re required to find the middle of a list in a single pass.
  3. You’re working on loop detection, palindromic structures, or Floyd’s Cycle Detection Algorithm.
  4. You want to check if a sequence is circular, especially in repeated processes like digits, pointers, or hashmaps.

How It Works

You set two pointers:

  • Slow Pointer – moves one step each time.
  • Fast Pointer – moves two steps at a time.

The reasoning holds because if there is a loop or cycle, the fast pointer will catch up with the slow pointer within the cycle. If there is no cycle, the fast pointer will reach the end (null) without ever meeting the slow pointer.

If a meeting occurs, you can use additional logic to find the entry point of the cycle or other key positions.

Example 1: Find a Cycle in a Linked List

Explanation: If there is a loop, fast will catch up with the slow pointer within the loop. If there is no loop, fast will reach the end of the list.

Example 2: Find the Middle of a Linked List

Explanation: When the fast reaches the end, the slow will be at the middle.

Common Interview Questions

  1. Find cycle in a linked list
  2. Find the beginning of the cycle
  3. Find middle of a linked list
  4. Check whether a linked list is a palindrome (stack + two pointers)
  5. Happy number (digit cycle)
  6. Intersection of two linked lists

Why It Works

The slow and fast pointer method effectively recognizes significant structural features of data without requiring additional memory. It runs in linear time with constant space, and it elegantly eliminates the need for hash sets or recursion in most scenarios.

Time Complexity: O(n)
Space Complexity: O(1)

Real-Life Analogy

Consider two friends jogging around a circular track — one jogging twice as quickly as the other. If they keep running, the faster friend will eventually lap the slower one — they’ll meet. This is exactly how the cycle detection logic works in data structures.

Tips for Mastery

  1. Practice linked list problems extensively — this pattern is common there.
  2. Visualize the movement of fast and slow pointers with diagrams to grasp the logic.
  3. Extend it to calculate the cycle length or find the cycle's starting node to deepen your understanding.
  4. Understand that this pattern tends to obviate the use of recursion or additional space.

8. Merge Intervals Pattern

Introduction

The Merge Intervals pattern can be used in problems dealing with overlapping ranges, time slots, bookings or intervals on a number line. The concept is simple: sort the intervals by their start times and then combine overlapping intervals into a single interval.

This pattern is very handy in practical applications like calendar scheduling, range merging, event processing, IP range filtering, and memory allocation. Interviewers adore this pattern since it checks how well you can manage sorting, edge cases, and iteration logic.

When to Use

You should use the Merge Intervals pattern when:

  1. You’re presented with a list of intervals (start and end times) and you need to merge overlapping ones.
  2. You must find gaps, availability, or minimum resources (such as meeting rooms).
  3. The problem involves scheduling, task planning, or timeline conflicts.
  4. You must provide the simplified version of a list of time ranges.

The solution to these problems lies in sorting the input and then merging step by step, comparing each interval with the previously merged one.

How It Works

The overall steps of the Merge Intervals pattern are:

  1. Sort all the intervals by their start times.
  2. Set up a result list with the first interval.
  3. For each of the rest of the intervals:
    • If the current interval is overlapping with the last one in the result, merge them by updating the end time.
    • Otherwise, append the current interval as is.
  4. Return the result list.

This method has the effect of combining overlapping intervals and separating non-overlapping ones.

Example: Merging Overlapping Intervals

Problem: Given a list of intervals, merge all the overlapping intervals.

Input: [[1, 4], [2, 5], [7, 9]]
Output: [[1, 5], [7, 9]]

Common Interview Questions

  1. Merge overlapping intervals
  2. Insert a new interval and merge
  3. Find minimum number of meeting rooms
  4. Employee free time
  5. Merge calendar appointments
  6. Task scheduling with overlapping durations

Why It Works

This design works well since sorting guarantees you know where overlap can occur at any time. By merging in sorted order, you can make the decision of whether to merge or append in linear time. It’s an excellent example of a greedy algorithm.

Time Complexity: O(n log n) for sorting + O(n) for merging
Space Complexity: O(n) for storing the merged intervals

Real-Life Analogy

Think about booking meeting rooms. If two people have overlapping bookings — one from 10 AM to 11 AM and another from 10:30 AM to 12 PM — you’ll have to merge these into a single slot from 10 AM to 12 PM. This is exactly what this pattern does.

Tips for Mastery

  1. Always sort the intervals before merging — skipping this step leads to incorrect results.
  2. Practice problems involving gaps and overlaps to get familiar with boundary conditions.
  3. Use this pattern with calendar apps, booking systems, or memory range allocation.
  4. Understand how to modify the merging logic when you’re asked to find unoccupied or free time instead.

9. Cyclic Sort Pattern

Introduction

The Cyclic Sort pattern is a simple yet highly effective algorithmic technique used primarily to solve problems involving arrays with elements in a known range, especially from 1 to n. The key idea is to place each number at its correct index — that is, the value 1 should go to index 0, value 2 to index 1, and so on.

What makes cyclic sort powerful is that it enables many problems to be solved in O(n) time without using extra space, which is especially useful in interviews where you’re expected to come up with in-place and optimized solutions.

When to Use

This pattern is particularly useful when:

  1. You are given an array of n elements ranging from 1 to n or 0 to n-1.
  2. The array may contain duplicates, missing numbers, or extra numbers.
  3. You need to solve in-place with constant space.
  4. You’re asked to find a missing number, duplicate number, or set the array in order.

If the input is within a known range and elements should ideally be placed at a specific index, this pattern should come to mind immediately.

How It Works

Here’s the simple logic behind Cyclic Sort:

  1. Iterate through the array from start to end.
  2. For each element at index i, check whether it is at its correct position (the value nums[i] belongs at index nums[i] - 1).
  3. If the value is not at the correct index, swap it with the value at its correct index.
  4. If it is at the right place, move to the next index.

This continues until the entire array is sorted into its correct “cyclic” positions.

Example: Find the Duplicate and Missing Number (1 to n)

Problem: Given an array with numbers from 1 to n with one number missing and one duplicate, identify both.

Input: [3, 1, 2, 5, 3]
Output: Duplicate = 3, Missing = 4

Python Code:
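Below is one possible implementation (a sketch; the function name find_duplicate_and_missing is illustrative):

def find_duplicate_and_missing(nums):
    i = 0
    while i < len(nums):
        correct = nums[i] - 1                                 # value nums[i] belongs at index nums[i] - 1
        if nums[i] != nums[correct]:
            nums[i], nums[correct] = nums[correct], nums[i]   # swap it into its correct slot
        else:
            i += 1                                            # already in place (or a duplicate), move on
    for i, value in enumerate(nums):
        if value != i + 1:                                    # this index holds the duplicate
            return value, i + 1                               # (duplicate, missing)
    return None, None

# find_duplicate_and_missing([3, 1, 2, 5, 3]) -> (3, 4)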

Common Interview Questions

  1. Find the missing number in array
  2. Find all missing numbers in an array
  3. Find the duplicate number
  4. Find both duplicate and missing
  5. Set mismatch (LeetCode)
  6. First missing positive integer

Why It Works

This algorithm uses the fact that values are bounded in a range, and hence can be placed directly at their correct positions. As a result, we can sort or detect mismatches without using extra space.

Time Complexity: O(n) — each number is swapped at most once
Space Complexity: O(1) — in-place swapping

Real-Life Analogy

Imagine placing numbered folders in a cabinet where each folder should go into a specific slot. If folder #3 is in slot 1, you just move it to slot 3. Keep moving them to the right places until each slot has the correct folder — that’s cyclic sort in action.

Tips for Mastery

  1. Start recognizing problems where array values map directly to indices.
  2. Practice swapping values to correct places without using visited arrays.
  3. Be cautious with duplicate values — they can cause infinite loops if not handled properly.
  4. Combine with patterns like Floyd’s Cycle Detection when dealing with linked values.

10. Sliding Window Pattern

Introduction

The Sliding Window pattern is one of the most efficient ways to solve problems that involve working with subarrays or substrings of fixed or dynamic length. It helps optimize the time complexity of problems that would otherwise require nested loops or brute force approaches.

The idea is simple: rather than recalculating the solution for each possible subarray, we maintain a “window” that can slide through the array or string, updating values as we go. This method reduces unnecessary recalculations and helps solve problems in linear time.

When to Use

You should apply the Sliding Window pattern when:

  1. You are given an array or string and asked to find something related to subarrays or substrings (e.g., sums, counts, maximum/minimum values).
  2. The problem requires you to keep track of values in a window of fixed or variable size (e.g., the sum of k consecutive elements).
  3. The problem asks you to find the longest, smallest, or shortest substring that satisfies certain conditions (e.g., contains all unique characters, all vowels, etc.).
  4. It’s crucial to reduce the time complexity from O(n^2) to O(n).

How It Works

  1. Define the window: This can be a subarray or substring. Start with an initial window, which might be a single element or a few elements, depending on the problem.
  2. Expand the window: Move the right pointer to expand the window to include more elements.
  3. Shrink the window: Move the left pointer to shrink the window when a condition is violated or met, depending on the problem requirements.
  4. Update your results: Track the relevant information within the window as you slide through the array or string.

Sliding the window means you can process the input in a single pass (O(n) time), making it far more efficient than using brute force methods.

Example: Longest Substring Without Repeating Characters

Problem: Given a string, find the length of the longest substring without repeating characters.

Input: “abcabcbb”
Output: 3

Explanation: The answer is “abc”, with the length of 3.

Common Interview Questions

  1. Longest substring without repeating characters
  2. Maximum sum of a subarray of size k
  3. Find all anagrams in a string
  4. Smallest subarray with a sum greater than or equal to a given number
  5. Longest substring with at most two distinct characters
  6. Sliding window maximum (maximum value in each window of size k)

Why It Works

The Sliding Window pattern works efficiently because it allows us to process the array or string in one pass by updating the solution incrementally rather than recalculating the entire solution each time.

By maintaining and adjusting a fixed-size window, we avoid redundant calculations and achieve O(n) time complexity.

Time Complexity: O(n) — since each element is processed at most twice (once when expanding and once when shrinking the window).
Space Complexity: O(k) — where k is the size of the window, which may be O(1) or O(n) depending on the problem.

Real-Life Analogy

Imagine you’re tracking your step count in a fitness app. Each day, you record the number of steps. You want to know the average steps for the past k days. Instead of recalculating the average for every k-day window from scratch, you slide over the days and keep track of the sum of the current window. If you add a new day’s steps, you subtract the old day’s steps that fall out of the window and then update the sum.

Tips for Mastery

  1. Start by identifying problems that ask for subarrays or substrings with certain properties.
  2. Practice solving both fixed-size window (e.g., sum of k consecutive elements) and dynamic-size window problems (e.g., longest substring without repeating characters).
  3. Ensure you understand when to expand and when to shrink the window — this depends on whether you are trying to maximize or minimize something.
  4. Master the application of a hashmap or set to track elements inside the window, which is common in string/character problems.

11. Top K Elements Pattern

Introduction

The Top K Elements pattern is often employed in coding interviews to solve problems that require finding the top or bottom k elements in a dataset. Whether you are asked to find the k largest elements, the k smallest elements, or some other variation, this pattern gives efficient solutions using data structures like heaps, along with sorting algorithms.

This design is particularly relevant in cases with large datasets where brute force strategies (e.g., sorting the entire dataset) are not efficient. By being concerned with just the top k elements, we can cut down the time complexity considerably.

When to Use

You should use the Top K Elements pattern when:

  1. You must find the k largest or k smallest elements of a dataset.
  2. The issue is a data stream where numbers are constantly added, and you must maintain a record of the top k values in real time.
  3. You must minimize time and space complexity, particularly when faced with large input.
  4. The question requests efficient algorithms that do not involve sorting the entire data set.

Some common problem types include:

  1. Determining the k smallest or largest elements in an array.
  2. Finding the k most frequent elements in an array.
  3. Finding the k closest elements to a given value.
  4. Real-time identification of the top k elements in a stream of data.

How It Works

There are many ways to efficiently solve Top K Elements problems based on the constraints of the problem. The most common techniques are:

1. Sorting:
  1. One of the simplest solutions is to sort the array and then take the first k elements (for smallest) or last k elements (for largest).
  2. Time complexity: O(n log n) because of sorting.
2. Heap (Priority Queue):
  1. A better solution is to use a min-heap to find the k largest elements or a max-heap to find the k smallest elements.
  2. By keeping a heap of size k, you can go through the dataset and keep only the top k elements.
  3. Time complexity: O(n log k), which is significantly less than O(n log n) when k is significantly less than n.
3. QuickSelect Algorithm:
  1. This algorithm is a divide-and-conquer algorithm like quicksort, but it only cares about finding the kth largest or smallest element.
  2. Once the kth element is in place, the elements on one side of the partition are the top k (and can be sorted separately if needed).
  3. Time complexity: O(n) on average, but O(n^2) in the worst case.

Example: Find the K Largest Elements

Problem: Find the k largest elements given an unsorted array.

Input: [3, 1, 4, 1, 5, 9, 2, 6, 5, 3], k = 4
Output: [9, 6, 5, 5]

Python Code using Heap:
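A minimal sketch using Python's heapq module as a min-heap of size k (the function name k_largest is illustrative):

import heapq

def k_largest(nums, k):
    min_heap = []
    for num in nums:
        heapq.heappush(min_heap, num)
        if len(min_heap) > k:              # keep only the k largest seen so far
            heapq.heappop(min_heap)        # discard the smallest of them
    return sorted(min_heap, reverse=True)

# k_largest([3, 1, 4, 1, 5, 9, 2, 6, 5, 3], 4) -> [9, 6, 5, 5]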

Common Interview Questions

  1. Find the k largest elements in an array.
  2. Find the k smallest elements of an array
  3. Find the k most common elements of an array.
  4. Find the k nearest elements to a particular value.
  5. Streaming problems: Monitor top k elements of real-time data.

Why It Works

The Top K Elements pattern minimizes superfluous work by only operating on the k elements we care about. Rather than sorting the whole dataset, we can use heaps or optimized algorithms such as QuickSelect to obtain the top k elements much faster.

By employing a min-heap (for largest) or max-heap (for smallest), we can effectively keep the top k elements in place as we go through the array. The logarithmic complexity of the heap ensures that we only perform swaps when required to maintain the top k elements, making the solution both time and space efficient.

Real-Life Analogy

Suppose you're a judge at a talent show with a big set of contestants, and you want to select the best k performers. Instead of comparing each contestant with every other contestant numerous times (inefficient), you keep a list of the current top performers and revise it as you go through each contestant. If a contestant outperforms the weakest performer on your list, they take that spot. This is the heap-based method for finding the top k elements.

Tips for Mastery

  1. Practice heap operations like inserting elements, deleting the smallest or largest element, and heapifying.
  2. Understand the trade-offs between using sorting and heaps based on the problem constraints.
  3. For real-time data tracking, think about sliding window combined with heap operations to efficiently manage incoming data.
  4. QuickSelect is an advanced approach for efficiently finding the kth largest/smallest element without fully sorting the array.

12. Depth-First Search (DFS) Pattern

Introduction

Depth-First Search (DFS) is a graph traversal algorithm that explores the nodes of a graph or tree by going as deep as possible along each branch before backtracking. This pattern is particularly helpful in problems where you need to traverse tree-like structures or graphs and find all possible paths from a given node or state. DFS can be implemented both recursively and iteratively and is generally applied where you need to explore all possibilities before backtracking.

DFS is a basic graph theory concept and is commonly applied to problems involving maze-solving, pathfinding, cycle detection, and topological sorting, among others.

When to Use

Use the DFS pattern in the following scenarios:

  1. When you want to traverse all the possibilities in a graph or tree structure.
  2. When the problem involves visiting every node or element prior to visiting its neighbors (or children).
  3. When you have to backtrack and look for other ways after hitting dead ends.
  4. Pathfinding, cycle detection, topological sorting, and graph traversal problems.

Typical problem examples include:

  1. Maze or puzzle solving.
  2. Finding paths from one node to another in a graph.
  3. Cycle detection in directed or undirected graphs.
  4. Topological sorting of directed acyclic graphs (DAGs).
  5. Connected component checking in an undirected graph.

How It Works

DFS goes as far along a branch of the graph as possible before it backtracks. Beginning at a node, DFS recursively or iteratively visits each child node, reaching the deepest node first before backtracking to visit the remaining nodes. The algorithm can be coded using either a stack (iterative) or recursion (recursive).

1. Recursive DFS:

In recursive DFS, the algorithm uses the system call stack to visit nodes. It maintains a set of visited nodes to prevent infinite loops in cyclic graphs.

2. Iterative DFS:

In iterative DFS, an explicit stack is utilized to simulate the recursive process. This is helpful when working with big graphs or where recursion depth could lead to a stack overflow.

Here's a simple recursive implementation in Python:
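(A minimal sketch; the example adjacency-list graph is an assumption chosen so that the traversal matches the output shown below.)

def dfs(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    print(node, end=" ")                  # process the current node
    for neighbour in graph[node]:
        if neighbour not in visited:      # skip already-visited nodes (handles cycles)
            dfs(graph, neighbour, visited)

# Example graph (an assumption, not from the original article)
graph = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}
dfs(graph, 0)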


Output: 0 1 3 4 2

Example: Pathfinding in a Maze

Problem: Given a maze represented by a grid, find if there is a path from the start (top-left corner) to the goal (bottom-right corner).

Input: A 2D grid in which 1 is an open path and 0 is a wall.
Output: True if there is a path from the start to the end, otherwise False.

Python Code using DFS:
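A sketch of a recursive DFS over the grid; the example maze below is an assumption (1 = open cell, 0 = wall) chosen so that a path exists:

def has_path(maze):
    rows, cols = len(maze), len(maze[0])
    visited = set()

    def dfs(r, c):
        if r < 0 or c < 0 or r >= rows or c >= cols:
            return False                       # outside the grid
        if maze[r][c] == 0 or (r, c) in visited:
            return False                       # wall or already explored
        if (r, c) == (rows - 1, cols - 1):
            return True                        # reached the goal
        visited.add((r, c))
        # explore down, up, right, left
        return dfs(r + 1, c) or dfs(r - 1, c) or dfs(r, c + 1) or dfs(r, c - 1)

    return dfs(0, 0)

# Example grid (an assumption, not from the original article)
maze = [
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 1],
]
print(has_path(maze))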

Output: True

Common Interview Questions

  1. Traverse a given tree or graph using DFS both recursively and iteratively.
  2. Use DFS to check if a graph contains a cycle.
  3. Find the connected components of an undirected graph with DFS.
  4. Find the path from one node to another in a graph with DFS.
  5. Use DFS to solve maze problems by finding paths or cycles.

Why It Works

DFS works because it guarantees that all nodes are visited and explored as deeply as possible. It's especially useful when you have to backtrack and try alternative possibilities. Using recursion or an explicit stack, DFS can effectively examine all nodes and find solutions, even in very large, complicated graphs or tree-like structures.

DFS uses less space for the nodes to be visited because you do not have to keep all the nodes in memory at the same time, in contrast to breadth-first search (BFS), whose queue can grow significantly.

The only caveat is that DFS does not necessarily find the shortest path in an unweighted graph, since it digs deep down one route without considering the optimal course.

Real-Life Analogy

Suppose you're in a huge library with a labyrinth of corridors, searching for a particular book. The library contains several aisles, but each aisle is extremely long. You begin at the front, choose an aisle, and just walk straight down it, scanning every book. If you get to the end and don't see it, you head back and begin down the next aisle. This is depth-first search — you go deep down one path until you reach a dead end, and then backtrack to attempt another path.

Tips for Mastery

  1. Practice backtracking using DFS: Most problems will ask you to explore several paths before backtracking, so concentrate on problems such as backtracking puzzles (e.g., Sudoku, N-Queens).
  2. Keep a watchful eye on recursive depth: For extremely large graphs, an iterative solution using an explicit stack can avoid recursion depth problems.
  3. Know its limitations: DFS does not necessarily guarantee finding the shortest path in an unweighted graph. If you require the shortest path, use BFS instead.

13. Breadth-First Search (BFS) Pattern

Introduction

Breadth-First Search (BFS) is another basic graph traversal algorithm. As opposed to DFS, which goes as deep as it can, BFS visits all of the neighbors of a node level by level before continuing to the next level. BFS is especially helpful for finding the shortest distance in an unweighted graph or solving problems where the shortest distance or nearest nodes matter.

The BFS pattern is used extensively in shortest path problems in graphs, searching for connected components, or even puzzles such as solving a sliding puzzle where the aim is to search for all the moves.

When to Use

You are supposed to use BFS in the following:

  1. Shortest Path: When you have to find the shortest path between two nodes in an unweighted graph.
  2. Level-order Traversal: BFS comes in handy if you want to traverse a graph or tree level by level.
  3. Connected Components: BFS can be used to identify the connected components in an undirected graph.
  4. Minimum number of moves: Problems such as word ladders or sliding puzzles in which the fewest number of steps should be taken to get from one state to another.
  5. Flood Fill: BFS is also applied in image processing operations, such as filling areas with a specific color.

Interview problems where BFS excels are:

  1. Finding the shortest path between two points in a maze.
  2. Solving problems such as the 8-puzzle or N-Queens.
  3. Identifying connected components within a graph.
  4. Level-order traversal in trees.

How It Works

BFS visits nodes level by level, i.e., it checks all the neighbors of a node before going on to the next level. BFS employs a queue to control the visiting process. When a node is visited, it is placed in the queue, and then its neighbors are enqueued one by one.

The algorithm is as follows:

  1. Begin with the root node: Start with the starting node and place it in the queue.
  2. Visit the neighbors: Pop the front element of the queue and visit all of its neighbors.
  3. Enqueue unvisited neighbors: For each of the neighbors, if it hasn't been visited yet, add it to the queue and mark it as visited.
  4. Repeat these steps until the entire graph has been visited or a solution is reached.

Example: Finding the Shortest Path in a Maze

Problem: Given a 2D grid describing a maze where 1 means an open cell and 0 means a wall, find the shortest path from the top-left position to the bottom-right.

This BFS pattern applies because it ensures we visit all nodes of the same distance level before advancing to nodes in the next level so that we capture the shortest path.

Python Code:
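A sketch that stores the path taken so far in each queue entry; the example grid is an assumption chosen to match the path shown in the output below:

from collections import deque

def shortest_path(maze):
    rows, cols = len(maze), len(maze[0])
    start, goal = (0, 0), (rows - 1, cols - 1)
    queue = deque([[start]])               # each queue entry is the path taken so far
    visited = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path                    # first time we reach the goal = shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] == 1 and (nr, nc) not in visited:
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None                            # no path exists

# Example grid (an assumption, not from the original article)
maze = [
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 1],
]
print(shortest_path(maze))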


Output: [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (2, 3), (3, 3)]

Common Interview Questions

  1. Implement the shortest path in an unweighted graph or grid using BFS.
  2. Level-order traversal in a binary tree.
  3. Implement the connected components in an undirected graph using BFS.
  4. Find the minimum number of steps to reach the destination in a game (for instance, a Knight's minimum path on a chessboard).
  5. Check whether two nodes are connected in an undirected graph.

Why It Works

BFS ensures the shortest path in unweighted graphs since it visits all nodes at the current depth level before proceeding to nodes at the next level. This ensures that the node is visited for the first time in the minimum number of steps.

BFS is more memory-demanding than DFS in that it keeps all nodes at the same depth level in the queue. Yet, due to its ability to guarantee the shortest path, it becomes invaluable for certain types of problems, e.g., the shortest path in a maze, or puzzle solutions like the 8-puzzle.

Real-Life Analogy

Imagine you’re in a large city and you need to find the shortest route to a particular destination. You start at your current location and begin checking the buildings right next to you. Once you have checked all the adjacent buildings, you then move on to the next set of buildings that are one step farther away. This is breadth-first search—you explore all immediate possibilities first before moving on to more distant ones.

Tips for Mastery

  1. Practice grid-based problems: BFS excels in problems such as mazes or grids, so practice problems such as pathfinding in grids, shortest path, or puzzle-solving such as the Knight's shortest path.
  2. Know the trade-offs: Though BFS guarantees the shortest path, it can be memory-intensive for huge graphs with many nodes per level.
  3. Use BFS for level-order traversal: BFS can be quite handy when working with trees, particularly when you want to process nodes level by level (e.g., print nodes in a tree).

14. Topological Sort Pattern

Introduction

Topological Sort is a linear ordering of vertices in a Directed Acyclic Graph (DAG) such that for every directed edge u -> v, vertex u precedes vertex v in the ordering. This is particularly helpful in problems involving dependencies, such as task scheduling or course prerequisites. That is, topological sort helps us order tasks in a manner that satisfies the dependency constraints among them.

The Topological Sort pattern is very useful in task scheduling, project management, and build systems where some tasks are dependent on the completion of others. It guarantees that tasks are executed in a manner where each task is only dependent on the ones that have been completed.

When to Use

You can use Topological Sort in scenarios such as:

  1. Task Scheduling: When there are dependencies between tasks, e.g., project scheduling or coursework.
  2. Course Prerequisites: Identifying the course sequence based on prerequisites.
  3. Build Systems: Identifying the build sequence of components or files based on their dependencies.
  4. Dependency Resolution: For cases that involve the resolution of dependencies among components or tasks.

Some common interview questions where Topological Sort can be used include:

  1. Identifying a valid sequence of tasks based on their dependencies.
  2. Resolving problems such as Course Schedule or Project Scheduling.
  3. Finding the compilation order of programs in build environments where files depend on each other.

How It Works

Topological Sort operates by repeatedly removing nodes that have no incoming edges from the graph until no node is left to visit. This can be achieved either through DFS (Depth-First Search) or Kahn's Algorithm (via BFS).

DFS-based Approach
  1. Begin DFS from unvisited nodes.
  2. Mark each node as visited and recurse into its neighbors.
  3. Push the node onto the stack once all adjacent nodes have been processed.
  4. In the end, reverse the stack to retrieve the topological order.
Kahn's Algorithm (BFS-based Method)
  1. Calculate the in-degree (number of incoming edges) of each node.
  2. Populate a queue with all the nodes that have an in-degree of 0.
  3. Process the nodes from the queue one at a time:

    1. Append the node to the result.
    2. Decrease its neighbors' in-degree.
    3. If a neighbor's in-degree becomes 0, add it to the queue.
  4. If all nodes are visited, the final list is a valid topological sort.

Example: Course Schedule

Problem: You are given a set of courses and prerequisites and want to find out in which order you should take the courses to earn your degree.

For example:

  • Course 1 requires Course 0.
  • Course 2 requires Course 1.
  • Course 3 requires Course 2.

We want to determine the order in which courses can be taken.

Solution using Topological Sort:

  1. Represent the prerequisites and courses as a graph.
  2. Use Topological Sort to determine the valid order.

Python Code (Using Kahn’s Algorithm):
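A minimal sketch, assuming four courses and prerequisites given as (course, prerequisite) pairs forming the chain described above (names such as find_course_order are illustrative):

from collections import deque

def find_course_order(num_courses, prerequisites):
    # prerequisites: list of (course, prerequisite) pairs
    graph = {c: [] for c in range(num_courses)}
    in_degree = {c: 0 for c in range(num_courses)}
    for course, prereq in prerequisites:
        graph[prereq].append(course)       # edge prereq -> course
        in_degree[course] += 1

    queue = deque(c for c in range(num_courses) if in_degree[c] == 0)
    order = []
    while queue:
        course = queue.popleft()
        order.append(course)
        for nxt in graph[course]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                queue.append(nxt)

    return order if len(order) == num_courses else []   # empty list signals a cycle

print(find_course_order(4, [(1, 0), (2, 1), (3, 2)]))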

Output: [0, 1, 2, 3]

The above output suggests that the correct order to study the courses is Course 0, Course 1, Course 2, and Course 3.

Common Interview Questions

  1. Course Schedule: Given a list of prerequisites, determine if it’s possible to finish all courses (i.e., check if a cycle exists in the graph).
  2. Project Scheduling: Given project dependencies, find the order in which projects should be completed.
  3. Task Scheduling: Given tasks and their dependencies, find the valid order to complete all tasks.
  4. Topological Sorting: Given a directed acyclic graph, determine a topological sort order.

Why It Works

Topological Sort guarantees that all the tasks are being done in the appropriate order, preserving their dependencies. It follows the rule of resolving dependencies where we initially work on tasks with no dependencies (in-degree 0), and then proceed to tasks with dependencies on those, and so forth. The end result is a series of tasks that can be finished without any dependency violation.

If there is a cycle in the graph (i.e., a task ultimately depends on itself), the algorithm will either return an incomplete list or signal that it is impossible to complete all tasks.

Real-Life Analogy

Suppose you're constructing a house, and several activities like painting, foundation work, and roofing need to be done in a particular order. You can't paint the walls until the foundation is in place, and you can't roof the house until the walls are erected. Topological sort will provide you with the proper order in which these activities need to be done, so everything ends up being executed in the right order.

Tips for Mastery

  1. Practice with Graphs: Topological Sort is mainly applied on directed acyclic graphs (DAGs). Practice problems around DAGs and their dependencies to make yourself proficient in this pattern.
  2. Know Kahn's Algorithm: DFS-based and BFS-based algorithms both work for Topological Sort, but Kahn's Algorithm (which is BFS-based) is generally more natural to apply to real-world scheduling problems.
  3. Handling Cycles: Make sure to handle cycles properly. If there is a cycle in the graph, no topological sort is possible, and the result should report failure.
