Compare commits

...

3 commits

Author          SHA1        Message                               Date
                2416a8e49e  added graphs and results, conclusion  2025-02-04 04:54:54 +01:00
                a3f88e2129  added command to pull info out        2025-02-04 04:54:54 +01:00
florian.huemer  d977d9be64  Update on Overleaf.                   2025-02-04 03:53:51 +00:00
17 changed files with 260 additions and 60 deletions


@ -14,6 +14,8 @@
\usepackage{subcaption}
\usepackage{pgfplots}
\usepackage{pgfplotstable}
\pgfplotsset{compat=1.18}
\usepackage{tabularx}
% Tikz because graphs are fun
\usepackage{tikz}
\usepackage{tikz-timing}
@ -22,7 +24,6 @@
\usetikzlibrary{positioning}
\makeglossaries
\renewcommand{\glstreepredesc}{\hspace{3em}}
% Acronyms for the document
\newacronym{dut}{DUT}{Design Under Test}
@ -35,11 +36,22 @@
\newacronym{prs}{PRS}{Production Rule Set}
\newacronym{uvm}{UVM}{Universal Verification Method}
\newacronym{eda}{EDA}{Electronic Design Automation}
\newacronym{dims}{DIMS}{Delay-Insensitive Minterm Synthesis}
\newacronym{ncl}{NCL}{Null Convention Logic}
\newacronym{nclx}{NCLX}{\acs{ncl} Flow with Explicit Completeness}
% Simple citation required command
\newcommand{\citationneeded}{\textcolor{red}{[citation needed]}}
\newcommand{\referenceneeded}{\textcolor{red}{[reference needed]}}
\makeatletter
\newcommand{\linebreakand}{%
\end{@IEEEauthorhalign}
\hfill\mbox{}\par
\mbox{}\hfill\begin{@IEEEauthorhalign}
}
\makeatother
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\begin{document}
@ -65,7 +77,7 @@ fhuemer@ecs.tuwien.ac.at}
TU Wien\\
Vienna, Austria \\
steininger@ecs.tuwien.ac.at}
\and
\linebreakand
\IEEEauthorblockN{Rajit Manohar}
\IEEEauthorblockA{Computer Systems Lab\\
Yale University\\
@ -116,14 +128,15 @@ With this in mind, we will, after visiting the related work in Section~\ref{sec:
\texttt{action} is an addition to the ACT toolchain initially presented in \cite{manoharOpenSourceDesign}. ACT aims to be a collection of tools for an end-to-end chip design workflow. While its main focus is asynchronous design, it is powerful enough to also map to synchronous logic families without issue \cite{vezzoliDesigningEnergyEfficientFullyAsynchronous2024}. The current version of the ACT toolflow does include a scripting environment \cite{heInteractInteractiveDesign}; however, it does not contain a solution for distributed computing tasks, which would be helpful for testing and verification. This is the gap we address in this paper.
Focusing on our specific demo use case, the tool presented in \cite{behalExplainingFaultSensitivity2021} is a fault-injection and fault-space exploration tool, aiming to explore fault types in a given circuit. It is quite similar to the demo use case we show in this paper. It distinguishes the fault classes \emph{timing deviation}, \emph{value fault}, \emph{code fault}, \emph{glitch}, \emph{deadlock}, and \emph{token count error}, which are largely reused for this paper (more on our system model in Section~\ref{sec:system_model/failures}). The core simulator used is QuestaSim (version 10.6c), a commercial simulation tool. To reduce overall runtime, a cluster-based approach is employed to parallelize simulations over multiple machines.
This tool has been developed for the \texttt{pypr} toolchain designed by Huemer at TU Wien \cite{huemerContributionsEfficiencyRobustness2022}, a production-rule-based circuit description framework in Python. Notably, the system calculates the number of required injections from an average injection density, independently of which signal is targeted. This is one of the main points we will try to improve upon.\\
% should i include work in master thesis?
An iteration of this system can be found in \cite{schwendingerEvaluationDifferentTools2022a}. While based on the same core toolflow, Schwendinger adds limited bridging logic to the ACT toolchain, using \texttt{prsim} \cite{manoharOpenSourceDesign} as an alternative simulator. This change requires low-level simulation of additional logic, as certain required features were not supported by \texttt{prsim} and no extension to the core simulator code was written. This, again, is a major point for potential improvement.
Finally, we want to briefly touch on different fault-mitigation techniques seen in the literature.\\
Bainbridge and Salisbury \cite{bainbridgeGlitchSensitivityDefense2009} discuss the basic possibilities for fault behavior in \ac{qdi} circuits. Much like \cite{behalExplainingFaultSensitivity2021}, they identify specific scenarios which can occur when a \ac{set} is injected into a circuit. We will come back to this in Section~\ref{sec:system_model/failures} as well. They then lay out basic mitigation techniques, which largely focus on either introducing some form of redundancy into the circuit or reducing the size of the time window in which faults are converted into failure behavior (the sensitivity window).
In a similar fashion, Huemer et al. \cite{huemerIdentificationConfinementFault2020} present interlocking and deadlocking versions of a \ac{wchb}. These are also meant to reduce the sensitivity window size and to prevent the propagation of illegal symbols. We will use their implementations of interlocking and deadlocking \acp{wchb} in this paper (more in Section~\ref{sec:experiment_setup}).
% should we maybe put this a bit further up the paper? I mean we want this to be the main point, no?
@ -137,7 +150,7 @@ In a similar fashion, Huemer et.al \cite{huemerIdentificationConfinementFault202
\label{fig:tooling/architecture}
\end{figure}
\texttt{action} itself is a tool flow framework. Its main purpose is to provide a build system which can operate both locally and remotely, shifting computing tasks away from the end-user machine. This means that the user can perform other tasks, or the connection to the user can be interrupted, while computation continues remotely without further intervention.
To configure \texttt{action} for a certain task, a string of tool invocations is defined in a \emph{pipeline} file in YAML grammar. While primarily meant for use with the ACT toolchain, \texttt{action} is at its core tool agnostic. As long as a corresponding tool adapter, as well as handling capability for the used data types, is provided, any tool (commercial or open source) can be invoked by it. This makes it particularly useful as a base framework for highly parallel and/or computationally intensive applications. It can alleviate interaction with clustering software for everyday tasks, as only the local command line tool needs to be invoked to perform pipeline execution.
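As an impression of this grammar, a pipeline file for our fault-injection use case may be sketched as follows. The key names are taken from the configuration file included in this changeset; the inline comments are our interpretation, not authoritative documentation:

```yaml
# Sketch of an action pipeline file; stages are executed in order and
# each stage is handled by a corresponding tool adapter.
prepare:
  hit-probability: 0.8     # assumed probability of an injection hitting a victim
  modes-per-fork: 1        # expected failure modes per fanout
  coverage-certainty: 0.4
  victim-coverage: 0.05
  injection-windows:
    - begin: 240
      end: 1200
deploy:
  sim_outputs: sim_result  # presumably the table receiving simulation results
  top: tb                  # testbench top level
```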
@ -242,20 +255,29 @@ where $P_{hit}$ additionally describes the probability of an injection hitting a
\label{fig:setup/testbench}
\end{figure}
To test our new tool, we simulated the same multiplier circuit as Behal et al. \cite{behalExplainingFaultSensitivity2021}. We did, however, not sweep over any form of pipeline load factor: we found the definition of that metric ambiguous, especially once the circuit contains non-linear pipelines, and have therefore opted to exclude it from further testing. For consistency, the multiplier was generated using \texttt{pypr} \cite{huemerContributionsEfficiencyRobustness2022}, which is able to generate \acs{prs} rules in an ACT container. We elected to simulate four different versions: a 4-bit multiplier with unit delays (10 time steps) in \acs{dims}, a 4-bit multiplier with randomized delays ($\pm 5\%$, \acs{prs} node delay between 95 and 105 time steps) in \acs{dims}, an 8-bit multiplier with unit delays (10 time steps) in \acs{dims}, and a 4-bit multiplier with unit delays (10 time steps) in \acs{nclx}.
The \ac{dut} is wrapped in a \acs{uvm}-like testbench setup, which is provided by our new simulation library. Our ambition is to enable a write-once, use-everywhere architecture, where wrapper code has to be written once and can then be reused arbitrarily for all tests and verification procedures. The overall architecture of the test setup can be seen in Figure~\ref{fig:setup/testbench}. Since asynchronous logic, unlike synchronous logic, inherently uses a message-passing abstraction (much like \acs{uvm} itself), we do not require much additional logic in the way of sequencers or monitors to interface with the \ac{dut}. Input tokens are directly forwarded to the \ac{dut}, the model, and the scoreboard.
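As a rough illustration of this token flow (not the actual simulation library API; all names here are hypothetical), the testbench structure can be sketched as:

```python
from collections import deque

class Scoreboard:
    """Compares DUT output tokens against model predictions, in order."""
    def __init__(self):
        self.expected = deque()
        self.mismatches = 0

    def push_expected(self, token):
        self.expected.append(token)

    def check(self, actual):
        # Tokens are compared in order; handshaking preserves token order.
        if self.expected.popleft() != actual:
            self.mismatches += 1

def run_test(inputs, dut, model, scoreboard):
    """Forward each input token to model and DUT, then score the outputs."""
    for token in inputs:
        scoreboard.push_expected(model(token))
        scoreboard.check(dut(token))
    return scoreboard.mismatches

# Toy example: a 4-bit multiplier model vs. a (here identical) stand-in DUT.
model = lambda ab: ab[0] * ab[1]
dut = model
assert run_test([(3, 5), (7, 9)], dut, model, Scoreboard()) == 0
```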
% Points to talk about:
% \begin{itemize}
%   \item how much more detail should i mention about the target circuit and families? we're starting to run low on space
%   \item can I have the tikz graphics that were in the Behal paper for deadlocking, interlocking\dots
% \end{itemize}
\begin{table}[ht]
\centering
\normalsize
\begin{tabularx}{0.4\textwidth}{|X|r|}
\hline
\textbf{Parameter} & \textbf{Default setting}\\
\hline
Hit probability & $0.8$ \\
\hline
Modes per fork & $1$ \\
\hline
Coverage certainty & $0.4$ \\
\hline
Victim coverage & $0.5$ \\
\hline
\end{tabularx}
\caption{Default generation engine configuration}
\label{tab:setup/config}
\end{table}
\begin{figure*}[htbp]
\centering
@ -282,16 +304,67 @@ Points to talk about
\caption{varying the number of selected signals}
\label{fig:res/deviation_sel_signals_dims}
\end{subfigure}
\caption{Variation of failure type rates in \acs{dims}, in percentage points, when sweeping individual parameters}
\label{fig:res/deviation_dims}
\end{figure*}
\begin{figure*}[htbp]
\centering
\begin{subfigure}{0.4\textwidth}
\begin{center}
\input{results/deviation_num_injections_nclx.tex}
\end{center}
\caption{varying the number of injections}
\label{fig:res/deviation_num_sims_nclx}
\end{subfigure}
%\hfill
\begin{subfigure}{0.4\textwidth}
\begin{center}
\input{results/deviation_num_signals_nclx.tex}
\end{center}
\caption{varying the number of selected signals}
\label{fig:res/deviation_sel_signals_nclx}
\end{subfigure}
\caption{Variation of failure type rates in \acs{nclx}, in percentage points, when sweeping individual parameters}
\label{fig:res/deviation_nclx}
\end{figure*}
To determine the performance of both our simulation environment and our fault-injection engine, we selected a configuration that would yield a low number of simulations by default (see Table~\ref{tab:setup/config}) and varied configuration parameters individually from there.
\section{Results}
\label{sec:results}
\begin{figure}
\centering
\scalebox{0.85}{\input{results/sim_scaling.tex}}
\caption{Simulation scaling over configuration parameters}
\label{fig:res/sim_scaling}
\end{figure}
In our testing, this setup has proven quite capable as a cluster simulation tool. When running a batch of 13317 simulations on 4 nodes with 4 jobs each, we measured a total execution time of 1 minute and 32 seconds. This equates to almost exactly 9 simulations per second per core, which is in large part due to \texttt{actsim}'s high performance.
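The throughput figure follows directly from the batch numbers, assuming each of the $4 \times 4$ job slots maps to one core:

```python
simulations = 13317
wall_time_s = 1 * 60 + 32   # 1 minute 32 seconds
cores = 4 * 4               # 4 nodes x 4 jobs, assuming one core per job

per_second_per_core = simulations / wall_time_s / cores
print(round(per_second_per_core, 2))  # 9.05, i.e. almost exactly 9
```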
Failure distributions for the examined circuits can be seen in Figure~\ref{fig:results/aggregated}. For the configuration shown in this graph, the assumed hit probability was lowered to $0.1$ (otherwise identical to Table~\ref{tab:setup/config}) to increase the number of simulations, in hopes of establishing an accurate baseline. From there, varying hit probability, assumed failure modes per fanout, and coverage certainty ultimately equates to a difference in the number of injections per selected signal.
Figures~\ref{fig:res/deviation_num_sims_dims} and \ref{fig:res/deviation_num_sims_nclx} show how the observed failure mode distribution changes when the number of injections per signal is decreased. We observe that, for both logic families, deviation stays at around one percentage point or less when going from over 20000 simulations down to about 3000.
As a similar exercise, we established another baseline for varying the percentage of signals to be targeted. For this, we configured the injection engine to select all signals, then gradually lowered the percentage of selected signals (see Figures~\ref{fig:res/deviation_sel_signals_dims} and \ref{fig:res/deviation_sel_signals_nclx}). While not as stable as when varying the number of simulations, deviation still stayed within reasonable limits down to about 50\% coverage, with the rate of glitches observed in \acs{nclx} deviating by about $2.9$ percentage points.
An interesting side note is that randomness clearly plays a role, as a signal coverage of $70\%$ shows less deviation from the reference for \acs{dims} than $90\%$ does. Further study would be needed to determine the variance of these values.
Finally, Figure~\ref{fig:res/sim_scaling} shows how the configuration parameters of the injection engine influence the calculated number of required injections. Coverage certainty, hit probability, and victim coverage lie in a value range of 0 to 1, while the expected number of failure modes per fanout lies in the range 1 to 3.\\
We see that hit probability, victim coverage, and expected failure modes change the number of simulations as expected. While we expected victim coverage to grow the number of injections linearly, this was under the assumption that all signals have identical fanout. However, since high-fanout signals are selected first, we see strong initial growth, which then plateaus as only low-fanout signals are left to be selected.
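The plateau can be reproduced with a toy model: if the injection count per signal scales with its fanout and signals are picked highest-fanout first, total injections grow quickly at low coverage and flatten out toward full coverage. A sketch under these assumptions (the fanout distribution here is invented):

```python
def injections_for_coverage(fanouts, coverage):
    """Total injections when selecting the top `coverage` fraction of
    signals by fanout, with injections proportional to fanout."""
    ranked = sorted(fanouts, reverse=True)      # high-fanout signals first
    k = max(1, round(coverage * len(ranked)))
    return sum(ranked[:k])

# Invented skewed distribution: a few high-fanout nets, many low-fanout ones.
fanouts = [16, 12, 8] + [2] * 17

totals = [injections_for_coverage(fanouts, c) for c in (0.1, 0.5, 1.0)]
# Strong initial growth, then a plateau: each coverage step adds less.
assert totals[1] - totals[0] > totals[2] - totals[1]
```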
\section{Conclusion}
\label{sec:conclusion}
When developing or verifying hardware, certain tasks require more resources than one local node can provide. To address this, we have presented \texttt{action}, an extensible cluster computation tool meant to augment the ACT toolchain. In addition, we have provided extensions to \texttt{actsim} which enable \acs{set} injections into a circuit, as well as a simulation library which enables \acs{uvm}-like testing setups. This customization on the source code level, in conjunction with \texttt{actsim}'s major improvements over previous simulators, such as mixed-fidelity simulation, has provided exceptional simulation throughput of 9 simulations per core per second.
We then used \texttt{action} in conjunction with \texttt{actsim} to develop a novel fault-injection engine, which we also presented in this paper. Through this engine, we were able to show low variance in the failure mode distribution of our \acs{dims} and \acs{nclx} implementations of a 4-bit multiplier when reducing the number of injections. We were also able to show useful performance scaling characteristics when increasing circuit size.
\renewcommand{\IEEEiedlistdecl}{\IEEEsetlabelwidth{SONET}}
\printacronyms
\renewcommand{\IEEEiedlistdecl}{\relax}
\printbibliography


@ -23,13 +23,13 @@
nodes near coords, % Option to show values near the bars
]
\addplot[fill=blue] table[x=Circuit, y=Timing deviation] {\aggregateddata};
\addplot[fill=red] table[x=Circuit, y=Value failure] {\aggregateddata};
\addplot[fill=brown] table[x=Circuit, y=Coding failure] {\aggregateddata};
\addplot[fill=gray] table[x=Circuit, y=Glitch] {\aggregateddata};
\addplot[fill=purple] table[x=Circuit, y=Deadlock] {\aggregateddata};
\addplot[fill=green] table[x=Circuit, y=Token count failure] {\aggregateddata};
\addplot[fill=cyan] table[x=Circuit, y=Nothing] {\aggregateddata};
\legend{Timing deviation, Value failure, Coding failure, Glitch, Deadlock, Token count failure, Nothing}


@ -1,9 +1,9 @@
Total simulations,Timing deviation,Value failure,Coding failure,Glitch,Deadlock,Token count failure,Nothing
Total simulations;Timing deviation;Value failure;Coding failure;Glitch;Deadlock;Token count failure;Nothing
2338;0.37;-0.81;-0.02;-1.01;0.02;-0.33;-0.50
3149;0.87;0.27;-0.40;0.14;-0.29;-0.04;-0.47
4773;0.10;-0.71;-0.53;-0.58;-0.07;-0.46;0.12
5015;-0.16;-0.01;0.54;-0.39;-0.08;-0.13;0.29
7780;0.04;-0.06;0.89;-0.37;-0.05;-0.27;-0.08
9583;0.34;0.00;0.93;-0.07;-0.09;-0.46;-0.29
13317;0.25;0.21;1.12;-0.06;-0.03;0.01;-0.42
25757;0.00;0.00;0.00;0.00;0.00;0.00;0.00



@ -1,5 +1,5 @@
\pgfplotstableread[col sep=semicolon]{results/deviation_num_injections_dims.csv}\deviationsnuminjectionsdims
\begin{tikzpicture}
\begin{axis}[
@ -13,12 +13,12 @@
grid=both, % Add a grid for better readability
]
\addplot[blue, mark=*] table[x=Timing deviation, y=Total simulations] {\deviationsnuminjectionsdims};
\addplot[red, mark=square*] table[x=Value failure, y=Total simulations] {\deviationsnuminjectionsdims};
\addplot[brown, mark=triangle*] table[x=Coding failure, y=Total simulations] {\deviationsnuminjectionsdims};
\addplot[gray, mark=diamond*] table[x=Glitch, y=Total simulations] {\deviationsnuminjectionsdims};
\addplot[purple, mark=+] table[x=Deadlock, y=Total simulations] {\deviationsnuminjectionsdims};
\addplot[green, mark=x] table[x=Token count failure, y=Total simulations] {\deviationsnuminjectionsdims};
\addplot[cyan, mark=halfcircle] table[x=Nothing, y=Total simulations] {\deviationsnuminjectionsdims};
\end{axis}
\end{tikzpicture}


@ -0,0 +1,9 @@
Total simulations;Timing deviation;Value failure;Coding failure;Glitch;Deadlock;Token count failure;Nothing
2445;0.19;-0.24;0.00;-0.37;-0.64;-0.07;-0.22
3263;0.35;0.25;0.00;0.87;-0.01;-0.09;-0.12
4943;-0.21;-0.47;0.00;-0.29;0.22;-0.24;0.29
5206;0.11;0.61;0.00;1.07;0.02;0.77;-0.05
8060;0.23;0.12;0.00;0.67;0.12;0.06;-0.23
9941;0.30;0.11;0.00;0.42;0.08;-0.04;-0.29
13765;0.00;0.12;0.00;0.63;0.06;0.00;-0.13
26807;0.00;0.00;0.00;0.00;0.00;0.00;0.00


@ -0,0 +1,24 @@
\pgfplotstableread[col sep=semicolon]{results/deviation_num_injections_nclx.csv}\deviationsnuminjectionsnclx
\begin{tikzpicture}
\begin{axis}[
width=.95\textwidth,
xlabel={Deviation from max runs (\% points)},
ylabel={Number Simulations},
ylabel near ticks,
xmin=-2, xmax=2,
ymin=2000, ymax=26000,
scaled y ticks=false,
grid=both, % Add a grid for better readability
]
\addplot[blue, mark=*] table[x=Timing deviation, y=Total simulations] {\deviationsnuminjectionsnclx};
\addplot[red, mark=square*] table[x=Value failure, y=Total simulations] {\deviationsnuminjectionsnclx};
\addplot[brown, mark=triangle*] table[x=Coding failure, y=Total simulations] {\deviationsnuminjectionsnclx};
\addplot[gray, mark=diamond*] table[x=Glitch, y=Total simulations] {\deviationsnuminjectionsnclx};
\addplot[purple, mark=+] table[x=Deadlock, y=Total simulations] {\deviationsnuminjectionsnclx};
\addplot[green, mark=x] table[x=Token count failure, y=Total simulations] {\deviationsnuminjectionsnclx};
\addplot[cyan, mark=halfcircle] table[x=Nothing, y=Total simulations] {\deviationsnuminjectionsnclx};
\end{axis}
\end{tikzpicture}


@ -1,7 +1,8 @@
Selected signals;Timing deviation;Value failure;Coding failure;Glitch;Deadlock;Token count failure;Nothing
100;0.00;0.00;0.00;0.00;0.00;0.00;0.00
90;-0.61;-1.00;-1.37;0.09;-0.38;-0.37;0.73
70;0.37;-0.81;-0.02;-1.01;0.02;-0.33;-0.50
50;0.00;0.13;-1.00;1.11;-0.19;0.16;0.46
30;0.14;-0.50;-0.32;1.43;0.39;0.37;-0.14
10;-1.48;-0.36;-4.48;4.55;1.70;1.58;0.41
5;1.56;0.19;-5.24;1.74;1.21;1.36;-1.94



@ -1,24 +1,24 @@
\pgfplotstableread[col sep=semicolon]{results/deviation_num_signals_dims.csv}\deviationsnumsignalsdims
\begin{tikzpicture}
\begin{axis}[
width=.95\textwidth,
xlabel={Deviation from max runs (\% points)},
ylabel={Signal Coverage (\%)},
ylabel near ticks,
xmin=-3, xmax=3,
ymin=0, ymax=100,
scaled y ticks=false,
grid=both, % Add a grid for better readability
]
\addplot[blue, mark=*] table[x=Timing deviation, y=Selected signals] {\deviationsnumsignalsdims};
\addplot[red, mark=square*] table[x=Value failure, y=Selected signals] {\deviationsnumsignalsdims};
\addplot[brown, mark=triangle*] table[x=Coding failure, y=Selected signals] {\deviationsnumsignalsdims};
\addplot[gray, mark=diamond*] table[x=Glitch, y=Selected signals] {\deviationsnumsignalsdims};
\addplot[purple, mark=+] table[x=Deadlock, y=Selected signals] {\deviationsnumsignalsdims};
\addplot[green, mark=x] table[x=Token count failure, y=Selected signals] {\deviationsnumsignalsdims};
\addplot[cyan, mark=halfcircle] table[x=Nothing, y=Selected signals] {\deviationsnumsignalsdims};
\end{axis}
\end{tikzpicture}


@ -0,0 +1,8 @@
Selected signals;Timing deviation;Value failure;Coding failure;Glitch;Deadlock;Token count failure;Nothing
100;0.00;0.00;0.00;0.00;0.00;0.00;0.00
90;-0.12;0.40;0.00;1.03;-0.09;0.49;0.16
70;-0.08;0.26;0.00;1.47;-0.29;0.30;0.20
50;0.19;0.73;0.00;2.92;0.05;0.28;-0.03
30;0.37;1.57;0.00;3.10;-0.36;1.41;-0.58
10;0.21;-0.33;0.00;4.98;0.31;1.02;-0.22
5;-0.02;2.61;0.00;9.49;0.09;5.99;-0.36


@ -0,0 +1,24 @@
\pgfplotstableread[col sep=semicolon]{results/deviation_num_signals_nclx.csv}\deviationsnumsignalsnclx
\begin{tikzpicture}
\begin{axis}[
width=.95\textwidth,
xlabel={Deviation from max runs (\% points)},
ylabel={Signal Coverage (\%)},
ylabel near ticks,
xmin=-3, xmax=3,
ymin=0, ymax=100,
scaled y ticks=false,
grid=both, % Add a grid for better readability
]
\addplot[blue, mark=*] table[x=Timing deviation, y=Selected signals] {\deviationsnumsignalsnclx};
\addplot[red, mark=square*] table[x=Value failure, y=Selected signals] {\deviationsnumsignalsnclx};
\addplot[brown, mark=triangle*] table[x=Coding failure, y=Selected signals] {\deviationsnumsignalsnclx};
\addplot[gray, mark=diamond*] table[x=Glitch, y=Selected signals] {\deviationsnumsignalsnclx};
\addplot[purple, mark=+] table[x=Deadlock, y=Selected signals] {\deviationsnumsignalsnclx};
\addplot[green, mark=x] table[x=Token count failure, y=Selected signals] {\deviationsnumsignalsnclx};
\addplot[cyan, mark=halfcircle] table[x=Nothing, y=Selected signals] {\deviationsnumsignalsnclx};
\end{axis}
\end{tikzpicture}

Binary file not shown.

results/sim_scaling.tex Normal file

@ -0,0 +1,40 @@
\pgfplotsset{width=7cm,compat=1.3}
\begin{tikzpicture}
\pgfplotsset{
scale only axis,
ymin=0, ymax=26000,
scaled y ticks=false,
}
\begin{axis}[
xmin=0, xmax=1,
axis x line*=bottom,
% xlabel=x-axis,
ylabel=Number of simulations,
]
\addplot[mark=x,red] table[x=Coverage certainty, y=Simulations, col sep=semicolon] {results/sim_scaling_dims_coverage_certainty.csv}; \label{scaling-plot-cov-cert}
\addplot[mark=o,green] table[x=Hit probability, y=Simulations, col sep=semicolon] {results/sim_scaling_dims_hit_prob.csv}; \label{scaling-plot-hit-prob}
\addplot[mark=x,blue] table[x=Victim coverage, y=Simulations, col sep=semicolon] {results/sim_scaling_dims_victim_coverage.csv}; \label{scaling-plot-vic-cov}
\end{axis}
\begin{axis}[
axis x line*=top,
axis y line=none,
xmin=0, xmax=4,
% xlabel=x-axis 2
]
\addlegendimage{/pgfplots/refstyle=scaling-plot-cov-cert}\addlegendentry{Coverage certainty}
\addlegendimage{/pgfplots/refstyle=scaling-plot-hit-prob}\addlegendentry{Hit probability}
\addlegendimage{/pgfplots/refstyle=scaling-plot-vic-cov}\addlegendentry{Victim coverage}
\addplot[smooth,mark=*,brown] table[x=Modes, y=Simulations, col sep=semicolon] {results/sim_scaling_dims_modes.csv};
\addlegendentry{Failure modes}
\end{axis}
\end{tikzpicture}


@ -0,0 +1,5 @@
Coverage certainty;Simulations
0.2;2338
0.4;3149
0.6;4773
0.8;9583


@ -0,0 +1,4 @@
Hit probability;Simulations
0.1;25757
0.5;5015
0.8;3149


@ -0,0 +1,4 @@
Modes;Simulations
1;3149
2;7780
3;13317


@ -0,0 +1,8 @@
Victim coverage;Simulations
1;4025
0.9;3849
0.7;3497
0.5;3149
0.3;2510
0.1;1314
0.05;609


@ -16,7 +16,7 @@ prepare:
hit-probability: 0.8
modes-per-fork: 1
coverage-certainty: 0.4
victim-coverage: 0.05
injection-windows:
- begin: 240
end: 1200
@ -35,5 +35,5 @@ deploy:
sim_outputs: sim_result
top: tb
# select count (*) from sim_outputs where (fault_flags & B'100000') = B'100000';
# select count (*) from sim_outputs where (fault_flags & B'000001') = B'000001';
# select count (*) from sim_outputs where (fault_flags & B'000010') = B'000010';
# select count (*) from sim_outputs where (fault_flags & B'000100') = B'000100';
# select count (*) from sim_outputs where (fault_flags & B'001000') = B'001000';
# select count (*) from sim_outputs where (fault_flags & B'010000') = B'010000';
# select count (*) from sim_outputs where (fault_flags & B'100000') = B'100000';
# select count (*) from sim_outputs where (fault_flags) = B'000000';
# update sim_configs set part_status = 'not_started' where part_status = 'finished' and not exists (select 1 from sim_outputs so where so.sim_config = sim_configs.id);
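For illustration, the six fault bits tested by the queries above (plus the all-zero "nothing" case) can be mirrored in a hypothetical Python flag type. The bit-to-category mapping below is an assumption, inferred from the column order of the result tables, and is not confirmed by the source:

```python
from enum import IntFlag

class FaultFlags(IntFlag):
    """Hypothetical mirror of the fault_flags bitmask; mapping assumed."""
    TIMING_DEVIATION  = 0b000001
    VALUE_FAILURE     = 0b000010
    CODING_FAILURE    = 0b000100
    GLITCH            = 0b001000
    DEADLOCK          = 0b010000
    TOKEN_COUNT_ERROR = 0b100000

# A simulation outcome may set several bits; "nothing" is all bits clear.
outcome = FaultFlags.GLITCH | FaultFlags.DEADLOCK
assert FaultFlags.GLITCH in outcome
assert not (outcome & FaultFlags.VALUE_FAILURE)
```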