Part 2 of our workshop outcomes covers open research questions (Part 1 looked at the pros and cons of the just-in-time approach).
Open Research Questions
Based on the papers presented and the ensuing discussions, the participants derived the following list of open questions:
- If what’s important won’t be missed ⇒ what’s missed is not important. Does this implication hold in practice?
- How do we allocate resources (e.g., budget) in the face of JIT requirements?
- How small is a “small” initial investment, and at what point does it become “big”? (Principle #2)
- What is the tradeoff between building in adaptability and building something you don’t really need (YAGNI: “you aren’t gonna need it”)?
- How can you tell that something is important without refining it? (e.g., slicing architecturally significant requirements; bucketing architectural work into 2–3-week periods)
- We don’t know the unknowns (e.g., quality attributes/NFRs). Would a requirements taxonomy be useful here?
- Can you do JIT RE on NFRs (e.g., security)?
- Is traceability different in JIT RE? Traditional traceability assumes complete artifacts on both ends, requirements and code/design/architecture, to trace between.
- How consistently are labels like “enhancement”, “major improvement”, “new feature”, and “user story” used across communities (e.g., cultural variance, agile versus OSS)? Can we rely on them as research data?
- Metrics are rich for coding and testing, but not for requirements/RE. How significant is the lack of RE metrics?
- Can spikes serve as a way of advancing (experimenting) and deriving or eliciting new requirements?
- What role does “fail early, fail often” branching/forking play in JIT RE?
- What are the relationships, at an abstract level, between the concepts of agile, time-constrained, and just-in-time RE? (e.g., agile development is not necessarily time-constrained)
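One way to start probing the label-consistency question above empirically is to compare per-project label distributions mined from issue trackers. A minimal sketch, using made-up (project, label) records in place of a real issue-tracker export (the project names and labels here are hypothetical):

```python
from collections import Counter

# Hypothetical issue-tracker records as (project, label) pairs;
# real data would come from an issue-tracker export or API query.
issues = [
    ("projA", "enhancement"), ("projA", "enhancement"),
    ("projA", "new feature"),
    ("projB", "user story"), ("projB", "enhancement"),
]

def label_distribution(records):
    """Count how often each label is used, per project."""
    dist = {}
    for project, label in records:
        dist.setdefault(project, Counter())[label] += 1
    return dist

print(label_distribution(issues))
```

Comparing these per-project distributions (e.g., whether one community’s “enhancement” maps onto another’s “new feature”) would indicate how far such labels can be trusted as research data.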