capturing human ai complementarity computer science

Capturing Human–AI Complementarity in Computer Science

Beyond Vibing in Computer Science: capturing human–AI complementarity skills you can share and collaborate on

Most of us start by “vibing” with AI: try a few prompts, take what looks good, ship it. Fine for tinkering, weak for teams. If you want repeatable quality, knowledge sharing, and real collaboration, make your practice visible.

1) Name the purpose before you start

Write one sentence: why use AI here instead of a simpler method? If you cannot justify it, build the non-AI baseline first. This builds ethical agency and avoids ritual prompting—copy-pasting “magic” prompts without matching them to the task.

2) Make inputs inspectable

Log what went in: sources, constraints, exact prompts/configs, and redactions. Version them. Future you (and your reviewers) need to see the setup, not just the output.

3) Don’t accept. Inspect.

Decide checks before generation. Keep a tiny test set, add one adversarial case, and record what you kept, changed, or removed. You are training professional judgment.

4) Document human vs. AI roles

Write the handoff: what AI proposes, what you approve, where you override. Role clarity turns assistance into accountable collaboration.

5) Capture the human value add

Note where you contributed sensemaking: prioritization, plain-language explanation, audience fit, accessibility. That is your professional signature.

6) Treat trade-offs like design

Record constraints you chose to live with: correctness, latency, cost, maintainability. Add a one-line rationale. Great teams trade openly, not silently.

7) Close the loop

Log one real issue, the fix, and how you will shift the human–AI split next time. End with three next actions. Small cycles build adaptive expertise.


A tiny block you can drop in any README or pull request

  • Purpose
  • Inputs (sources, prompt/config link)
  • Checks run
  • Human ↔ AI roles (handoff, overrides)
  • Trade-offs chosen
  • Human value add (sensemaking, clarity, accessibility)
  • Learning and next actions (3 bullets)

REACT 7 Questions reflection journal is intended to be used to explicitly document human–AI complementarity and ways of working strengthening computer science students’ adaptive expertise, increase value capture, knowledge sharing, and collaboration compared with freeform reflection


Comments

Leave a Reply

Your email address will not be published. Required fields are marked *

en_CAEnglish