# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
config USER_STACKTRACE_SUPPORT
	bool

config HAVE_FTRACE_NMI_ENTER
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_GRAPH_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_GRAPH_FP_TEST
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_DYNAMIC_FTRACE
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_FTRACE_MCOUNT_RECORD
	bool
	help
	  See Documentation/trace/ftrace-design.txt

config HAVE_SYSCALL_TRACEPOINTS
	bool
	help
	  See Documentation/trace/ftrace-design.txt
config TRACER_MAX_TRACE
	bool

config FTRACE_NMI_ENTER
	bool
	depends on HAVE_FTRACE_NMI_ENTER

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	bool

config CONTEXT_SWITCH_TRACER
	bool

config RING_BUFFER_ALLOW_SWAP
	bool
	help
	  Allow the use of ring_buffer_swap_cpu.
	  Adds a very slight overhead to tracing when enabled.
# All tracer options should select GENERIC_TRACER. Options that are
# enabled by all tracers (context switch and event tracer) select TRACING
# instead. This allows those options to appear when no other tracer is
# selected, but not when something else selects them. We need the two
# options GENERIC_TRACER and TRACING to avoid circular dependencies while
# still hiding the automatic options.
config TRACING
	bool
	select STACKTRACE if STACKTRACE_SUPPORT
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway; they were tested to build and work. Note that new
	# exceptions to this list aren't welcome; better to implement
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT

if TRACING_SUPPORT

menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If tracing is disabled
	  at runtime (the bootup default), the overhead of these
	  instructions is very small and not measurable even in
	  micro-benchmarks.
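# A usage sketch of the runtime switch described above, assuming debugfs is
# mounted at its conventional location, /sys/kernel/debug. Guarded so it is
# a harmless no-op on systems without ftrace available.

```shell
# Turn the function tracer on, peek at the trace, and turn it off again.
TRACING=/sys/kernel/debug/tracing
if [ -w "$TRACING/current_tracer" ]; then
    echo function > "$TRACING/current_tracer"   # patch the NOPs into tracer calls
    head -n 20 "$TRACING/trace"                 # sample of the recorded function calls
    echo nop > "$TRACING/current_tracer"        # back to the near-zero-cost NOPs
else
    echo "ftrace not available on this system"
fi
```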
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	help
	  Enable the kernel to trace a function at both its return
	  and its entry.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information such
	  as the return value. This is done by saving the current return
	  address into a stack of calls on the current task structure.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
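# A sketch of the maximum-search workflow the help text describes, assuming
# debugfs is mounted at /sys/kernel/debug; guarded so it degrades gracefully
# where the irqsoff tracer is not built in.

```shell
# Arm the irqs-off tracer, restart the maximum search, read the worst case.
TRACING=/sys/kernel/debug/tracing
if [ -w "$TRACING/tracing_max_latency" ]; then
    echo irqsoff > "$TRACING/current_tracer"
    echo 0 > "$TRACING/tracing_max_latency"   # reset, (re-)starting the search
    sleep 1
    cat "$TRACING/tracing_max_latency"        # worst irqs-off latency, in microseconds
else
    echo "irqsoff tracer not available on this system"
fi
```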
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks into various tracepoints in the kernel,
	  allowing the user to pick and choose which tracepoints they
	  want to trace. It also includes the sched_switch tracer plugin.
config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	help
	  Basic tracer to catch the syscall entry and exit events.
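# Once built in, the syscall events show up under the tracing events
# directory. A sketch, assuming debugfs at /sys/kernel/debug and the usual
# events/syscalls layout; guarded for systems without it.

```shell
# Enable every syscall entry/exit event, sample the trace, disable again.
TRACING=/sys/kernel/debug/tracing
if [ -d "$TRACING/events/syscalls" ]; then
    echo 1 > "$TRACING/events/syscalls/enable"
    head -n 20 "$TRACING/trace"
    echo 0 > "$TRACING/events/syscalls/enable"
else
    echo "syscall tracepoints not available on this system"
fi
```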
config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  Branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if-statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /sys/kernel/debug/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	  /sys/kernel/debug/tracing/profile_branch

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed in much detail.

endchoice
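# Reading the branch-profile results the two options above export, assuming
# debugfs at /sys/kernel/debug; the column layout comment reflects what these
# files typically print and may vary by kernel version.

```shell
# Show the first annotated-branch profile entries, if profiling is built in.
PROF=/sys/kernel/debug/tracing/profile_annotated_branch
if [ -r "$PROF" ]; then
    head -n 20 "$PROF"    # columns: correct  incorrect  %  Function  File  Line
else
    echo "branch profiling not built into this kernel"
fi
```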
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer, so you can see when and
	  where the events happened, as well as their results.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.
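# The sysctl toggle mentioned above, as a sketch; guarded so it does
# nothing on kernels built without the stack tracer.

```shell
# Toggle the stack tracer via sysctl and dump the deepest stack seen so far.
CTL=/proc/sys/kernel/stack_tracer_enabled
if [ -e "$CTL" ]; then
    sysctl kernel.stack_tracer_enabled=1
    cat /sys/kernel/debug/tracing/stack_trace   # worst-case stack, frame by frame
    sysctl kernel.stack_tracer_enabled=0
else
    echo "stack tracer not available on this system"
fi
```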
config WORKQUEUE_TRACER
	bool "Trace workqueues"
	select GENERIC_TRACER
	help
	  The workqueue tracer provides some statistical information
	  about each cpu workqueue thread, such as the number of works
	  inserted and executed since its creation. It can help to
	  evaluate the amount of work each of them has to perform. For
	  example, it can help a developer decide whether to use a
	  per-cpu workqueue instead of a singlethreaded one.
config BLK_DEV_IO_TRACE
	bool "Support for tracing block IO actions"
	select GENERIC_TRACER
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe
config KPROBE_EVENT
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	bool "Enable kprobes-based dynamic events"
	help
	  This allows the user to add tracing events (similar to tracepoints)
	  on the fly via the ftrace interface. See
	  Documentation/trace/kprobetrace.txt for more details.

	  Those events can be inserted wherever kprobes can probe, and record
	  various register and memory values.

	  This option is also required by the perf-probe subcommand of perf
	  tools. If you want to use perf tools, this option is strongly
	  recommended.
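# A sketch of the on-the-fly event interface described above. The event name
# 'myprobe' and the probed symbol do_sys_open are illustrative choices, not
# mandated by the interface; see Documentation/trace/kprobetrace.txt for the
# full syntax. Guarded for systems without kprobe events.

```shell
# Define a kprobe event, enable it, sample the trace, then remove it.
TRACING=/sys/kernel/debug/tracing
if [ -w "$TRACING/kprobe_events" ]; then
    echo 'p:myprobe do_sys_open' >> "$TRACING/kprobe_events"
    echo 1 > "$TRACING/events/kprobes/myprobe/enable"
    head -n 10 "$TRACING/trace"
    echo 0 > "$TRACING/events/kprobes/myprobe/enable"
    echo '-:myprobe' >> "$TRACING/kprobe_events"     # delete the probe again
else
    echo "kprobe events not available on this system"
fi
```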
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled, which defaults to zero.
	  When a 1 is echoed into this file, profiling begins; when a zero is
	  entered, profiling stops. A "functions" file is created in the
	  trace_stats directory; this file shows the list of functions that
	  have been hit and their counters.
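# The echo-1/echo-0 workflow above, sketched; the per-cpu stat file name
# (trace_stat/function*) is an assumption about the layout your kernel
# creates, so adjust to what actually appears under the tracing directory.

```shell
# Start the profiler, let it collect briefly, read the counters, stop it.
TRACING=/sys/kernel/debug/tracing
if [ -w "$TRACING/function_profile_enabled" ]; then
    echo 1 > "$TRACING/function_profile_enabled"
    sleep 1
    head -n 20 "$TRACING"/trace_stat/function*   # hit counts per function
    echo 0 > "$TRACING/function_profile_enabled"
else
    echo "function profiler not available on this system"
fi
```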
config FTRACE_MCOUNT_RECORD
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD
config FTRACE_SELFTEST
	bool
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.
config EVENT_TRACE_TEST_SYSCALLS
	bool "Run selftest on syscall events"
	depends on FTRACE_STARTUP_TEST
	help
	  This option will also enable testing every syscall event.
	  It only enables each event, runs various loads with the event
	  enabled, and then disables it. This adds a bit more time to
	  kernel bootup, since this is run on every system call defined.

	  TBD - enable a way to actually call the syscalls as we test their
	  events
config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.
config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous,
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.
config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by other running processes.
endif # TRACING_SUPPORT