\chapter{Design Method Evaluation} \label{chap:reflection} \section{Elements of a Feature} Over the course of this study, the definition of a feature evolved into a combination of requirements, components and functions. As explained in the background chapter, having an approach to determine requirements is a crucial part of a design process. Because \ac{ridm} did not include such an approach, a \ac{se} approach was added. The aim of the \ac{se} approach is to deliver a set of features to be used in the \ac{ridm}. More specifically, the set of features was expected to be the result of the feature definition step. Contrary to that expectation, multiple attempts at this step did not produce a satisfactory definition of features. As explained in \autoref{sec:case_featuredefinition}, there was a clear discrepancy between the expected and resulting features. Features were expected to take the form of components that developed during the design process. However, the resulting features emerged as functions of the system. In the end, a solution was found in the RobMoSys approach. Even though the RobMoSys approach was too comprehensive for this case study, it provided the basis for the split between functions and components. Furthermore, it resulted in the hierarchical structure of functions and sub-functions as shown in \autoref{fig:robmosys}. \begin{figure} \centering \includegraphics[width=85mm]{graphics/functional_relation.pdf} \caption{Relations and elements within a feature. \autocite{kordon_model-based_2007}} \label{fig:functional_relation} \end{figure} Creating a hierarchy for the functions and a separate set of components allowed the case study to continue. Still, a number of challenges remained with this approach. For example, it was almost impossible to divide the requirements between components and functions. Furthermore, the role of electronics did not fit the approach either. 
In reviewing the literature, the approach used in this case study shows clear resemblances to \ac{mbed} \autocite{kordon_model-based_2007}. \ac{mbed} introduces explicit relations between requirements, components and functions, as shown in \autoref{fig:functional_relation}. Additionally, the paper includes a layout for the hierarchy of requirements, functions and components. The approach by \textcite{kordon_model-based_2007} therefore further supports the idea of dividing features into requirements, functions and components. What is interesting about this new insight is that it helps to understand the difference with the case study performed by \textcite{broenink_rapid_2019}. The hardware component used by Broenink and Broenink was a mini-segway, designed to be used in an undergraduate practical. The requirement for this mini-segway to be able to balance, drive and steer is inherited directly from the student project, causing the requirements and components to be implicitly defined in their case study. Therefore, the function that needs to be implemented fits very well within the definition of a feature. \section{Model and Design Relation} \label{sec:evaluation_model_and_design} Neither the \ac{ridm} nor the design method in this study makes an explicit distinction between the model and the design. This implicitly causes the model itself to dictate the design. According to \textcite{stachowiak_allgemeine_1973}, three general properties apply to a model. First, a model is always representative of its original; second, the model must only include attributes of its original that are relevant to the respective developer or user; and third, the model must be pragmatic with respect to the original, meaning that models are an adaptation of the original for a specific purpose. These properties coincide with the different modelling approaches used during the case study. 
The dynamic models did not start directly with 3D physics, as that would conflict with the second property. However, as the design becomes more refined, it cannot be represented with only basic kinematics calculations. The step to 2D, and later 3D, physics is made such that the model still represents the design. Parallel to the dynamics model, a CAD drawing was used to model the shape of the hardware components, simply because each model represents the design for a different purpose. Even though the models in the case study satisfy the properties described above, this has a significant implication for the current design method. As the model is used to represent the current design, switching to a different modelling approach changes the representation of the design as well. Two direct consequences were identified from the case study. The first is that there is a discrepancy between the effort required for a design change and the effort required for the corresponding model update. This was seen in the case study when the model was reconstructed with 3D physics while the design did not change, resulting in a couple of days of work spent reconstructing the model without significant progress in the design. The second consequence is that the design was split over the dynamics model and the CAD drawing: both included the kinematics of the \ac{scara}; the controller and stepper behavior were defined in the dynamics model; and the shapes of the components were defined in the CAD drawing. This organization of models and design has two major downsides. The first is that a switch in modelling approach is not only labor intensive, but error prone as well. Copying parameters from one model and pasting them into another model is an unwanted practice. The second problem is that not every type of model can represent the same information. Although the CAD drawing contains a lot of detail, it cannot represent the electric behavior of the motors. For this case study the motors are \ac{ots} components. 
The electric behavior of the motor is therefore represented by a product number. However, this is not applicable when the design requires custom motors with a specific electric behavior, which would probably result in a situation where specific design details are spread over different sub-models. Such a subdivision of details across different models is, without any doubt, undesired. \section{System Complexity} \autoref{sec:time_investment} explains the time resources required for the development of the software in the system. Even though the focus was on creating a hardware-focused solution for the ``Tweet on a whiteboard'', the complexity of the software required for this system was underestimated. \textcite{royce_managing_1970} also acknowledges this difference in complexity between software and hardware: he expects 50 pages of software documentation for each page of hardware documentation in projects with comparable budgets. Although the focus was on a complex hardware solution, this solution was only possible with the use of software. The interaction between the \ac{scara} and \ac{cdc} is only possible with software that can switch states. Furthermore, the path planning used to write characters on the board is completely software dependent as well. \textcite{sheard_718_1998} discusses that pure-hardware solutions are relatively simple from a problem-space perspective. However, they are often complex from a solution-space perspective. And indeed, during the initial design in the case study, the choice was made for the most complex hardware solution. But without software, the \ac{scara} and \ac{cdc} have no functionality. Another point on system complexity is prototyping. Because hardware tends to be relatively simple, building a hardware prototype such as the \ac{scara} is cheap and quick. An initial hardware prototype is easily constructed with readily available \ac{ots} components. 
Because the hardware transfers power, the interfacing between components is straightforward. For example, linear actuation can be achieved with a rack-and-pinion construction, a linear motor, a gear and chain link, or a connecting rod. This might not be part of the final product, but it is useful to investigate the feasibility of the project. Furthermore, changes are also easily made to hardware: it is possible to weld or glue new parts on, or remove them with an angle grinder. Adding components to software is tedious and can lead to unwanted behavior, which is difficult to test because the software is more complex. Moreover, unwanted behavior of the hardware is readily discovered, and when hardware breaks it is often destructive. Software, in contrast, can run for multiple days before crashing, as a result of integer, stack or buffer overflows, for example. As long as the development is still in progress, a single hardware system is more malleable than the software in terms of resources. Once production of a product starts, however, changing multiple hardware systems becomes economically unviable. A design method for \ac{cps} must acknowledge that software has a high \emph{cost of change} as well as a high \emph{chance of failure}. Additionally, the design method must use the hardware prototype's low \emph{cost of change} to its advantage. \section{Preparation Phase} The start of this chapter explains the reason to prepend the preparation phase to the \ac{ridm}. The preparation phase aims to produce the requirements and features, based on the waterfall method. However, during the case study, the waterfall method proved to be problematic. Especially during the first steps, the amount of information was scarce, which made it tempting to work ahead. For example, a simple proof of concept during the requirement step would have resulted in valuable information. This was, however, not possible, as the goal was to follow the specified design method as closely as possible. 
Looking at the current case study, where the system under design is relatively simple, more design experience would be sufficient to overcome the information shortage. Unfortunately, this requires experienced developers, who are scarce themselves. As was pointed out in \autoref{sec:evaluation_reflection_protoype}, perceiving the current design as a prototype would also improve the information situation. Similarly, \textcite{royce_managing_1970} proposed to use a prototype in order to reduce the reliance on human judgment. A common denominator of these proposals is that they all deal with the dependency on human judgment, either by improving or by reducing this judgment. Nonetheless, these proposals seem like a suppression of symptoms rather than an actual improvement of the design method. Interestingly, when the current design is regarded as a prototype and the design method is repeated, the approach is comparable with the first cycle of the spiral model \autocite{boehm_spiral_1988}. \textcite{broenink_rapid_2019} state that the development cycle of the \ac{ridm} is based, among other methods, on the spiral model. It may therefore be the case that prepending the waterfall model was an attempt to reinvent the wheel. \section{Rapid Iterative Design Method} This chapter began with a breakdown of the elements of a feature, argued the importance of the distinction between design and model, and explained the need for an integrated preparation phase. The commonality between these three issues is that they all stem from the rapid development cycle, which was introduced in \autoref{sec:background_rdc} as part of the \ac{ridm}. It is apparent that the current implementation of the rapid development cycle is not suited for the design of a cyber-physical system. Further studies, which take these issues into account, should be undertaken. Even though these issues have a large impact on the overall performance, they must not overshadow the rest of the design method. 
The feature selection step and the variable detail approach did show a positive impact on the design method. The following sections discuss their performance and the potential impact of an improved rapid development cycle. \subsection{Feature Selection} The goal of the rapid development cycle is to process a list of features into a competent model. In this case, the list of features was produced in the preparation phase. The features are then, one by one, implemented according to the variable detail approach. To determine the order of feature implementation, I specified a feature selection protocol, which is explained in \autoref{sec:feature_selection}. Based on this case study, the feature selection is a suitable addition to the design method, especially considering the failed feature implementation described in \autoref{sec:case_development_cycle_1}. Had the \ac{scara} been implemented first, a failure in the end-effector might have required a redesign of the \ac{scara} feature. However, with only two uses during this case study, caution must be applied. Regarding the criteria themselves, the risk-time factor and the dependees are, in my opinion, the most relevant. The dependees form a hard criterion: if a feature depends on features that are not yet implemented, it cannot be selected. Otherwise, the feature would be implemented before the required information is available. As explained in \autoref{sec:case_development_cycle_1}, the feature selection approach aims to clear the largest amount of risk in the smallest amount of time. However, between the dependees and the risk-time factor, there is a criterion for the number of tests, which could hinder this approach. The current approach can result in a situation where a feature with many easy-to-pass tests is implemented before a feature with fewer, but more difficult, tests. It is then possible to spend a lot of time on something that is very likely to pass anyway. 
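The selection logic described here can be sketched in a few lines of code. This is an illustrative sketch only, not the protocol as specified in \autoref{sec:feature_selection}: the names \texttt{Feature}, \texttt{risk\_time\_factor} and \texttt{select\_next}, and the exact tie-breaking by number of tests, are assumptions for the sake of the example.

```python
# Hypothetical sketch of the feature selection protocol: the dependee
# criterion acts as a hard filter, the risk-time factor as a soft ordering.
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    risk: float    # estimated risk cleared when this feature is complete
    time: float    # estimated implementation time
    n_tests: int   # number of tests the feature must pass
    dependees: list = field(default_factory=list)  # names of features it depends on

def risk_time_factor(f: Feature) -> float:
    """Higher is better: clear the largest risk in the smallest time."""
    return f.risk / f.time

def select_next(features, implemented):
    # Hard criterion: every dependee must already be implemented.
    candidates = [f for f in features
                  if f.name not in implemented
                  and all(d in implemented for d in f.dependees)]
    # Soft criteria: risk-time factor first, number of tests as tie-breaker.
    return max(candidates,
               key=lambda f: (risk_time_factor(f), f.n_tests),
               default=None)

scara = Feature("scara", risk=0.8, time=5, n_tests=4, dependees=["end_effector"])
effector = Feature("end_effector", risk=0.6, time=2, n_tests=2)
# With nothing implemented, the SCARA is blocked by its dependee,
# so the end-effector is selected first.
print(select_next([scara, effector], implemented=set()).name)  # end_effector
```

Encoding the dependee rule as a filter rather than a score makes the problematic interaction visible: among unblocked features, a large bundle of easy tests can still outrank a small set of difficult ones.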
This does not alter the fact that, to complete the design, all tests have to pass. This is also the reason for this criterion in the first place: give priority to the feature that passes the most tests on completion. Even though it is difficult to draw concrete conclusions about the feature selection, a recommendation is to use the number and risk of tests as metrics in the risk-time factor calculation. In addition, it is possible that other metrics improve the risk-time calculation, such as the number of dependees, the number of tests of those dependees, or other metrics that aid the risk assessment. Further work is required to establish which metrics are suitable to improve the risk calculation. \subsection{Variable Detail Approach} The variable detail approach is a very practical development tool. A note of caution is due here, since the variable detail approach has not been used to its full potential. The goal was to add detail to a feature in strictly defined steps. Between each step, the tests are applied to the updated model. Based on the test results, development continues or the model is rolled back to an earlier version. In addition, the models, independent of their level of detail, can be reused in other models. However, multiple difficulties were encountered during the case study, which hindered the variable detail approach. As mentioned in \autoref{sec:evaluation_reflection_development}, the lack of good version control made it difficult to work with multiple versions of a model. This made it difficult to switch or revert to other levels of detail. However, the greatest difficulty is due to the model representing the design, as discussed in \autoref{sec:evaluation_model_and_design}. Because the design contains a certain level of detail, and the model is a full representation of the design, it is difficult to make a simple implementation or to switch back. 
This strong relation between the model and the design also caused the complete model to be switched to a different representation. Even though the variable detail approach did not perform as planned, I expect this approach to be a very strong part of the design method, given that a solution is found to the problems described above.
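The intended cycle of the variable detail approach, adding detail in strictly defined steps, testing after each step and rolling back on failure, can be sketched as follows. This is a minimal sketch under stated assumptions: the function names, the representation of a model as a plain value, and the list-based history standing in for proper version control are all hypothetical, not part of the method as specified.

```python
# Hypothetical sketch of the variable detail development loop:
# apply detail steps one by one, test the result, roll back on failure.
def develop_feature(model, detail_steps, run_tests):
    """Refine `model` through `detail_steps`; keep a step only if its tests pass."""
    history = [model]  # stand-in for proper version control of model versions
    for add_detail in detail_steps:
        candidate = add_detail(history[-1])
        if run_tests(candidate):
            history.append(candidate)  # tests pass: keep the more detailed model
        # tests fail: discard the candidate and continue from history[-1],
        # i.e. roll back to the last version that passed its tests
    return history[-1]

# Toy usage: models as dicts, one good detail step and one rejected step.
steps = [lambda m: {**m, "kinematics": "2d"},
         lambda m: {**m, "broken": True}]
final = develop_feature({}, steps, run_tests=lambda m: "broken" not in m)
print(final)  # {'kinematics': '2d'}
```

The sketch also shows why the difficulties noted above are so damaging: without reliable versioning, the `history` that makes rollback cheap does not exist, and each failed step becomes expensive to undo.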