Inquiry AI
Socratic Mission: Compare Fractions with Unlike Denominators

Pie Slice Comparer

This interactive Grade 4 mission builds a deep conceptual understanding of comparing fractions with unlike denominators. Follow the AI-guided steps to master the logic behind the numbers.

Grade 4 · Compare Fractions with Unlike Denominators


Mission Progress

0/3


Step 1

Active Step

[Discovery] On the bar, build the LARGER pie fraction. Between 7/12 and 11/18, the larger is 11/18 — partition into 18 and shade 11.

Partition Lab

Split the whole into equal parts

Target: 11/18 · Current: 0/1
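The partition-and-shade step above can be sketched as a text bar model (a minimal illustration; `bar_model` is a hypothetical helper, not part of the platform):

```python
def bar_model(numerator: int, denominator: int) -> str:
    """Render a fraction bar as text: '#' for shaded parts, '.' for unshaded."""
    return "#" * numerator + "." * (denominator - numerator)

# Build 11/18: partition the whole into 18 equal parts, then shade 11.
print(bar_model(11, 18))  # ###########.......
```

Each character is one equal part, so the 18-character bar with 11 shaded cells mirrors the Partition Lab target.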

Mastery Expansion

View Topic Hub →

Common Questions

Everything you need to know about the Socratic experience.

How do I solve the first step of "Pie Slice Comparer"?

On the bar, build the LARGER pie fraction. Between 7/12 and 11/18, the larger is 11/18 — partition into 18 and shade 11. Hint: Cut the bar into 18 equal pieces, then shade 11.

What does the final step of "Pie Slice Comparer" check?

It checks which strategy made the comparison clearest. If you get stuck, the adaptive hint is: for 7/12 vs 11/18, the cleanest path is a common denominator.
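The common-denominator path the hint names can be checked in a few lines of Python with the standard-library `fractions` and `math` modules (an illustration of the arithmetic, not the platform's code):

```python
from fractions import Fraction
from math import lcm

a, b = Fraction(7, 12), Fraction(11, 18)

# Common denominator: lcm(12, 18) = 36, so 7/12 = 21/36 and 11/18 = 22/36.
common = lcm(a.denominator, b.denominator)
print(common)                                   # 36
print(a.numerator * (common // a.denominator))  # 21
print(b.numerator * (common // b.denominator))  # 22
print(a < b)                                    # True, so 11/18 is larger
```

With like denominators, the comparison reduces to comparing numerators: 21/36 < 22/36.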

Why is this mission classified as challenger?

Challenger missions push beyond CCSS expectations with edge cases that surface deeper misconceptions. Within Grade 4 Compare Fractions with Unlike Denominators, expect numbers that stretch past the typical grade-level range.

What's a common mistake in Grade 4 Compare Fractions with Unlike Denominators that this mission targets?

Assuming 1/4 < 1/8 because 4 < 8 (transferring whole-number thinking to denominators). Bigger denominator means MORE pieces, so each piece is SMALLER. Use side-by-side bars: 1/8 is visibly smaller than 1/4.
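The side-by-side bar comparison can be made concrete with same-width text bars (a minimal sketch; `bar` is a hypothetical helper, not part of the platform):

```python
from fractions import Fraction

def bar(frac: Fraction, width: int = 8) -> str:
    """Draw `frac` of a bar `width` cells wide ('#' shaded, '.' unshaded)."""
    shaded = frac.numerator * width // frac.denominator
    return "#" * shaded + "." * (width - shaded)

# Same-size wholes, side by side: 1/8 is visibly smaller than 1/4,
# even though 8 > 4 (more pieces means each piece is smaller).
print("1/4 ->", bar(Fraction(1, 4)))  # ##......
print("1/8 ->", bar(Fraction(1, 8)))  # #.......
```

Because both bars represent the same whole, the shorter shaded run for 1/8 directly counters the whole-number-thinking misconception.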

What should I learn after Pie Slice Comparer?

Add Same-Denominator Fractions: same-denominator addition is what makes comparison by common denominator possible. Open /grade-4/addfractions to start that topic's missions.

Is Inquiry AI Common Core aligned?

Yes. Every mission, handbook page, and topic hub is mapped to a specific CCSS code (visible in the page header). The curriculum follows the CCSS coherence map: Grade 1 number sense → Grade 3 multiplicative thinking → Grade 6 ratio reasoning, with each grade building strictly on the prior year's foundations.

What is inquiry-based learning, and how does Inquiry AI apply it?

Inquiry-based learning starts with a question, not a formula — students explore, hypothesize, and verify before being told the rule. In Inquiry AI, every mission opens with a "Discovery" step (manipulate the model), then "Abstraction" (write the equation), then "Reflect" (apply to a new case). The procedure is never given upfront; learners derive it from their own observations.

How is Guided Discovery Learning different from "just letting kids figure it out"?

Pure discovery is inefficient — kids hit a wall and quit. Guided Discovery scaffolds the path: a careful sequence of questions, models, and adaptive hints leads the learner toward the insight without revealing it. Inquiry AI's hint system fires automatically after ~15s of hesitation or on the first mistake, escalating from a Socratic nudge to a worked example only when needed. Mistakes are diagnosed via "misconception keys" so the hint matches the actual wrong-thinking pattern.
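The trigger described above (fire after ~15 s of hesitation, or on the first mistake) might look like this in outline. This is a hedged sketch only: the threshold constant and function names are assumptions, not Inquiry AI's real API:

```python
# Assumed values for illustration; not the platform's actual configuration.
HESITATION_THRESHOLD_S = 15.0  # hint fires after ~15 s of idle time

def should_fire_hint(seconds_idle: float, mistake_count: int) -> bool:
    """A hint fires on sustained hesitation or on the first mistake."""
    return seconds_idle >= HESITATION_THRESHOLD_S or mistake_count >= 1

print(should_fire_hint(16.0, 0))  # True  (hesitation)
print(should_fire_hint(3.0, 1))   # True  (first mistake)
print(should_fire_hint(3.0, 0))   # False (still in productive struggle)
```

Keying the hint to whichever condition trips first is what lets the system respond to both stuck-and-silent and fast-but-wrong learners.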

What does it mean for a math platform to be "Socratic"?

Socratic teaching answers a question with a better question. Instead of "the answer is 12", the system asks "if you had 3 groups of 4, how could you skip-count?" The goal is to externalize the learner's reasoning so they hear themselves think. Every Inquiry AI hint follows this pattern: nudge → reframe → analogy → only then a worked example, in that order.
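The nudge → reframe → analogy → worked-example ladder can be sketched as a simple escalation function (names and structure are illustrative assumptions, not Inquiry AI's actual implementation):

```python
# Hypothetical hint ladder, ordered least to most revealing.
HINT_LADDER = ["nudge", "reframe", "analogy", "worked_example"]

def next_hint(failed_attempts: int) -> str:
    """Escalate one rung per failed attempt, capping at the worked example."""
    level = min(failed_attempts, len(HINT_LADDER) - 1)
    return HINT_LADDER[level]

print(next_hint(0))  # nudge
print(next_hint(9))  # worked_example
```

Capping at the final rung preserves the Socratic ordering: the worked example is reached only after the gentler prompts have been exhausted.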

What is the Concrete-Pictorial-Abstract (C-P-A) approach?

C-P-A is the Singapore Math sequence proven to deepen number sense: first manipulate physical objects (Concrete), then draw pictures of them (Pictorial), and only then write equations (Abstract). Inquiry AI structures every mission as exactly these three steps — a manipulative, a picture/grid model, and finally the equation. Skipping straight to symbols is the #1 cause of math anxiety; the platform refuses to do it.

Why does Inquiry AI let kids "struggle" before showing the answer?

Research on "productive struggle" shows that 20–60 seconds of focused effort BEFORE help dramatically improves long-term retention — the brain encodes the strategy more deeply. Inquiry AI's hint timing is calibrated to this window: short enough to prevent frustration, long enough to lock in the learning. Parents can adjust the threshold in settings if a learner needs faster scaffolding.