Developing mathematical skills requires the opportunity to practice and to receive immediate, individualized feedback on the misconceptions or mistakes made in the problem-solving process. Substantial progress has been made in recent years in the design of feedback systems for fundamental math education. Applied mathematics education for engineering disciplines, however, lacks a large body of examples with pre-worked solution paths and known difficulty, which are necessary for providing learners with (semi-)automated feedback. This is mostly due to the need for domain-specific and situated tasks, which cannot be deployed as widely as generic items. The effort required for designing appropriate items, validating their appropriateness for specific learning outcomes, and calibrating their difficulty cannot be borne by individual teachers and is also hardly justifiable for commercial providers of item pools. In this paper, we show how these challenges can be addressed via a community approach to item design and calibration, supported by methods from the field of computerized adaptive testing.
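The difficulty calibration mentioned above could, for instance, build on item response theory as used in computerized adaptive testing. The following sketch is purely illustrative and not taken from the paper: it estimates the difficulty of a single Rasch-model item from binary responses via Newton-Raphson, under the simplifying assumption that learner abilities are already known; the function name and data are hypothetical.

```python
import math


def estimate_difficulty(responses, abilities, iters=25):
    """Estimate the difficulty b of one Rasch item by maximum likelihood.

    responses -- list of 0/1 outcomes (1 = correct) for one item
    abilities -- known ability values of the same learners (a simplification;
                 in practice abilities and difficulties are estimated jointly)
    """
    b = 0.0
    for _ in range(iters):
        # Rasch success probability for each learner: sigmoid(theta - b)
        probs = [1.0 / (1.0 + math.exp(-(theta - b))) for theta in abilities]
        grad = sum(p - x for p, x in zip(probs, responses))  # dL/db
        hess = sum(p * (1.0 - p) for p in probs)             # -d2L/db2
        b += grad / hess  # Newton step toward the likelihood maximum
    return b


# Hypothetical data: five learners of spread-out ability answer two items.
abilities = [-1.5, -0.5, 0.0, 0.5, 1.5]
easy = estimate_difficulty([1, 1, 0, 1, 1], abilities)  # mostly solved
hard = estimate_difficulty([0, 0, 0, 1, 0], abilities)  # mostly failed
# The frequently solved item receives the lower difficulty estimate.
```

In a community setting, such per-item estimates could be recomputed as learner responses accumulate, so that item difficulties are calibrated collectively rather than by each teacher in isolation.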