Compare commits

...

3 commits

Author SHA1 Message Date
682227333c removed extreme delay from 8x8 mult 2025-02-04 11:04:11 +01:00
fe7bf800c4 add randomized dims results 2025-02-04 11:04:11 +01:00
Andreas Steininger 48a5013ab5 Update on Overleaf. 2025-02-04 10:04:09 +00:00
3 changed files with 4185 additions and 4175 deletions


@@ -376,11 +376,21 @@ To determine the performance of both our simulation environment, as well as our
\label{fig:res/sim_scaling}
\end{figure}
In our testing, this setup has shown itself quite capable as a cluster simulation tool. When running a batch of 13317 simulations, we measured a total execution time of 1 minute and 32 seconds, when executing on 4 nodes of 4 jobs each. This equates to almost exactly 9 simulations per second per core, which is in large part due to \texttt{actsim}'s high performance.
In our testing, \texttt{action} has shown itself quite capable as a cluster simulation tool. When running a batch of 13317 simulations, we measured a total execution time of 1 minute and 32 seconds when executing on 4 nodes with 4 jobs each. This equates to almost exactly 9 simulations per second per core, which is in large part due to \texttt{actsim}'s high performance.
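For reference, the quoted per-core rate follows directly from the batch size, the wall-clock time, and the number of concurrent jobs (a back-of-the-envelope check assuming that each of the $4 \times 4 = 16$ jobs occupies one core):
\begin{equation*}
    \frac{13317~\text{simulations}}{92~\text{s} \times 16~\text{cores}} \approx 9.05~\text{simulations per second per core}.
\end{equation*}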
Failure distributions for the examined circuits can be seen in \cref{fig:results/aggregated}. The simulation configuration shown in this graph was set to a lower assumed hit-probability ($0.1$, otherwise identical to Table~\ref{tab:setup/config}), to increase the number of simulations in hopes of establishing an accurate baseline. From there, varying hit-probability, assumed failure modes per fanout, and coverage certainty ultimately equates to a difference in injections per selected signal.
\textcolor{red}{ST: We could add text about scalability here}
\cref{fig:res/deviation_num_sims_dims,fig:res/deviation_num_sims_nclx} show how observed failure mode distribution changes when the number of injections per signal is decreased. We observe that for both logic families, deviation is less than a single percentage point when going from over 20000 simulations down to about 3000.
With respect to the proposed fault-injection setup, one key question is how many injections are actually needed to obtain statistically stable results. This can be seen as a measure of how well-targeted the parameter selection is.
Failure distributions for the examined circuits can be seen in \cref{fig:results/aggregated}.
\textcolor{red}{ST: do we really have such a low proportion of "nothing"? I can hardly believe that. Same for the high proportion of "timing".}
The simulation configuration shown in this graph assumed a lower hit-probability ($0.1$; otherwise identical to Table~\ref{tab:setup/config}) in order to increase the number of simulations and thereby establish an accurate baseline. From there, varying the hit-probability, the assumed failure modes per fanout, and the coverage certainty ultimately amounts to a difference in the number of injections per selected signal.
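One way this relationship could be made explicit (an assumed form for illustration only; the exact expression used by the injection engine is not reproduced here) is to require that each of $m$ assumed failure modes per fanout, hit with probability $p$ per injection, is exercised at least once with certainty $C$, giving
\begin{equation*}
    n_{\mathrm{inj}} \;\gtrsim\; m \cdot \frac{\ln(1 - C)}{\ln(1 - p)},
\end{equation*}
so lowering the hit-probability or raising the coverage certainty or the number of assumed failure modes directly increases the number of injections per selected signal.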
\cref{fig:res/deviation_num_sims_dims,fig:res/deviation_num_sims_nclx} show how the observed failure-mode distribution changes when the number of injections per signal is decreased. We observe that for both logic styles, the deviation is less than a single percentage point when going from over 20000 simulations down to about 3000.
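Throughout, deviation is given in percentage points; assuming it refers to the per-mode difference in observed shares between a reduced run and the baseline, it corresponds to
\begin{equation*}
    \Delta \;=\; \max_{f \in F} \bigl| \hat{P}_{\mathrm{reduced}}(f) - \hat{P}_{\mathrm{baseline}}(f) \bigr|,
\end{equation*}
where $F$ is the set of observed failure modes and $\hat{P}$ denotes the relative frequency of each mode.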
\textcolor{red}{ST: Probably more interesting would be a comparison of 4x4 and 8x8 multiplier, as these have}
As a similar exercise, we established another baseline test for varying the percentage of signals to be targeted. For this, we configured the injection engine to select all signals, then gradually lowered the percentage of selected signals (see Figures~\ref{fig:res/deviation_sel_signals_dims} and~\ref{fig:res/deviation_sel_signals_nclx}). While not as stable as varying the number of simulations, the deviation still stayed within limits down to about 50\% of selected signals, with the proportion of glitches observed in \acs{nclx} deviating by about $2.9$ percentage points.

Binary file not shown.

File diff suppressed because it is too large