Linux Device Drivers

Driver Basics

Driver Entry and Exit points

Atomic and pointer manipulation

int atomic_read(const atomic_t * v)

read atomic variable

Parameters

const atomic_t * v
pointer of type atomic_t

Description

Atomically reads the value of v.

void atomic_set(atomic_t * v, int i)

set atomic variable

Parameters

atomic_t * v
pointer of type atomic_t
int i
required value

Description

Atomically sets the value of v to i.

void atomic_add(int i, atomic_t * v)

add integer to atomic variable

Parameters

int i
integer value to add
atomic_t * v
pointer of type atomic_t

Description

Atomically adds i to v.

void atomic_sub(int i, atomic_t * v)

subtract integer from atomic variable

Parameters

int i
integer value to subtract
atomic_t * v
pointer of type atomic_t

Description

Atomically subtracts i from v.

int atomic_sub_and_test(int i, atomic_t * v)

subtract value from variable and test result

Parameters

int i
integer value to subtract
atomic_t * v
pointer of type atomic_t

Description

Atomically subtracts i from v and returns true if the result is zero, or false for all other cases.

void atomic_inc(atomic_t * v)

increment atomic variable

Parameters

atomic_t * v
pointer of type atomic_t

Description

Atomically increments v by 1.

void atomic_dec(atomic_t * v)

decrement atomic variable

Parameters

atomic_t * v
pointer of type atomic_t

Description

Atomically decrements v by 1.

int atomic_dec_and_test(atomic_t * v)

decrement and test

Parameters

atomic_t * v
pointer of type atomic_t

Description

Atomically decrements v by 1 and returns true if the result is 0, or false for all other cases.

int atomic_inc_and_test(atomic_t * v)

increment and test

Parameters

atomic_t * v
pointer of type atomic_t

Description

Atomically increments v by 1 and returns true if the result is zero, or false for all other cases.

int atomic_add_negative(int i, atomic_t * v)

add and test if negative

Parameters

int i
integer value to add
atomic_t * v
pointer of type atomic_t

Description

Atomically adds i to v and returns true if the result is negative, or false when result is greater than or equal to zero.

int atomic_add_return(int i, atomic_t * v)

add integer and return

Parameters

int i
integer value to add
atomic_t * v
pointer of type atomic_t

Description

Atomically adds i to v and returns i + v

int atomic_sub_return(int i, atomic_t * v)

subtract integer and return

Parameters

int i
integer value to subtract
atomic_t * v
pointer of type atomic_t

Description

Atomically subtracts i from v and returns v - i

int __atomic_add_unless(atomic_t * v, int a, int u)

add unless the number is already a given value

Parameters

atomic_t * v
pointer of type atomic_t
int a
the amount to add to v...
int u
...unless v is equal to u.

Description

Atomically adds a to v, so long as v was not already u. Returns the old value of v.

short int atomic_inc_short(short int * v)

increment of a short integer

Parameters

short int * v
pointer of type short int

Description

Atomically adds 1 to v. Returns the new value of v.

Delaying, scheduling, and timer routines

struct prev_cputime

snapshot of system and user cputime

Definition

struct prev_cputime {
#ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
  cputime_t utime;
  cputime_t stime;
  raw_spinlock_t lock;
#endif
};

Members

cputime_t utime
time spent in user mode
cputime_t stime
time spent in system mode
raw_spinlock_t lock
protects the above two fields

Description

Stores previous user/system time values such that we can guarantee monotonicity.

struct task_cputime

collected CPU time counts

Definition

struct task_cputime {
  cputime_t utime;
  cputime_t stime;
  unsigned long long sum_exec_runtime;
};

Members

cputime_t utime
time spent in user mode, in cputime_t units
cputime_t stime
time spent in kernel mode, in cputime_t units
unsigned long long sum_exec_runtime
total time spent on the CPU, in nanoseconds

Description

This structure groups together three kinds of CPU time that are tracked for threads and thread groups. Most things considering CPU time want to group these counts together and treat all three of them in parallel.

struct thread_group_cputimer

thread group interval timer counts

Definition

struct thread_group_cputimer {
  struct task_cputime_atomic cputime_atomic;
  bool running;
  bool checking_timer;
};

Members

struct task_cputime_atomic cputime_atomic
atomic thread group interval timers.
bool running
true when there are timers running and cputime_atomic receives updates.
bool checking_timer
true when a thread in the group is in the process of checking for thread group timers.

Description

This structure contains the version of task_cputime, above, that is used for thread group CPU timer calculations.

int pid_alive(const struct task_struct * p)

check that a task structure is not stale

Parameters

const struct task_struct * p
Task structure to be checked.

Description

Test whether a process is not yet dead (at most in zombie state). If pid_alive() fails, then pointers within the task structure can be stale and must not be dereferenced.

Return

1 if the process is alive. 0 otherwise.

int is_global_init(struct task_struct * tsk)

check if a task structure is init. Since init is free to have sub-threads we need to check tgid.

Parameters

struct task_struct * tsk
Task structure to be checked.

Description

Check if a task structure is the first user space task the kernel created.

Return

1 if the task structure is init. 0 otherwise.

int task_nice(const struct task_struct * p)

return the nice value of a given task.

Parameters

const struct task_struct * p
the task in question.

Return

The nice value [ -20 ... 0 ... 19 ].

bool is_idle_task(const struct task_struct * p)

is the specified task an idle task?

Parameters

const struct task_struct * p
the task in question.

Return

1 if p is an idle task. 0 otherwise.

void threadgroup_change_begin(struct task_struct * tsk)

mark the beginning of changes to a threadgroup

Parameters

struct task_struct * tsk
task causing the changes

Description

All operations which modify a threadgroup - a new thread joining the group, death of a member thread (the assertion of PF_EXITING) and exec(2) dethreading the process and replacing the leader - are wrapped by threadgroup_change_{begin|end}(). This is to provide a place which subsystems needing threadgroup stability can hook into for synchronization.

void threadgroup_change_end(struct task_struct * tsk)

mark the end of changes to a threadgroup

Parameters

struct task_struct * tsk
task causing the changes

Description

See threadgroup_change_begin().

int wake_up_process(struct task_struct * p)

Wake up a specific process

Parameters

struct task_struct * p
The process to be woken up.

Description

Attempt to wake up the nominated process and move it to the set of runnable processes.

Return

1 if the process was woken up, 0 if it was already running.

It may be assumed that this function implies a write memory barrier before changing the task state if and only if any tasks are woken up.

void preempt_notifier_register(struct preempt_notifier * notifier)

tell me when current is being preempted & rescheduled

Parameters

struct preempt_notifier * notifier
notifier struct to register

void preempt_notifier_unregister(struct preempt_notifier * notifier)

no longer interested in preemption notifications

Parameters

struct preempt_notifier * notifier
notifier struct to unregister

Description

This is not safe to call from within a preemption notifier.

__visible void __sched notrace preempt_schedule_notrace(void)

preempt_schedule called by tracing

Parameters

void
no arguments

Description

The tracing infrastructure uses preempt_enable_notrace to prevent recursion and tracing preempt enabling caused by the tracing infrastructure itself. But as tracing can happen in areas coming from userspace or just about to enter userspace, a preempt enable can occur before user_exit() is called. This will cause the scheduler to be called when the system is still in usermode.

To prevent this, the preempt_enable_notrace will use this function instead of preempt_schedule() to exit user context if needed before calling the scheduler.

int sched_setscheduler(struct task_struct * p, int policy, const struct sched_param * param)

change the scheduling policy and/or RT priority of a thread.

Parameters

struct task_struct * p
the task in question.
int policy
new policy.
const struct sched_param * param
structure containing the new RT priority.

Return

0 on success. An error code otherwise.

NOTE that the task may already be dead.

int sched_setscheduler_nocheck(struct task_struct * p, int policy, const struct sched_param * param)

change the scheduling policy and/or RT priority of a thread from kernelspace.

Parameters

struct task_struct * p
the task in question.
int policy
new policy.
const struct sched_param * param
structure containing the new RT priority.

Description

Just like sched_setscheduler, only don’t bother checking if the current context has permission. For example, this is needed in stop_machine(): we create temporary high priority worker threads, but our caller might not have that capability.

Return

0 on success. An error code otherwise.

void __sched yield(void)

yield the current processor to other threads.

Parameters

void
no arguments

Description

Do not ever use this function, there’s a 99% chance you’re doing it wrong.

The scheduler is at all times free to pick the calling task as the most eligible task to run; if removing the yield() call from your code breaks it, it's already broken.

Typical broken usage is:

while (!event)
        yield();

where one assumes that yield() will let ‘the other’ process run that will make event true. If the current task is a SCHED_FIFO task that will never happen. Never use yield() as a progress guarantee!!

If you want to use yield() to wait for something, use wait_event(). If you want to use yield() to be ‘nice’ for others, use cond_resched(). If you still want to use yield(), do not!

int __sched yield_to(struct task_struct * p, bool preempt)

yield the current processor to another thread in your thread group, or accelerate that thread toward the processor it’s on.

Parameters

struct task_struct * p
target task
bool preempt
whether task preemption is allowed or not

Description

It’s the caller’s job to ensure that the target task struct can’t go away on us before we can do any checks.

Return

true (>0) if we indeed boosted the target task. false (0) if we failed to boost the target. -ESRCH if there's no task to yield to.

int cpupri_find(struct cpupri * cp, struct task_struct * p, struct cpumask * lowest_mask)

find the best (lowest-pri) CPU in the system

Parameters

struct cpupri * cp
The cpupri context
struct task_struct * p
The task
struct cpumask * lowest_mask
A mask to fill in with selected CPUs (or NULL)

Note

This function returns the recommended CPUs as calculated during the current invocation. By the time the call returns, the CPUs may have in fact changed priorities any number of times. While not ideal, it is not an issue of correctness since the normal rebalancer logic will correct any discrepancies created by racing against the uncertainty of the current priority configuration.

Return

(int)bool - CPUs were found

void cpupri_set(struct cpupri * cp, int cpu, int newpri)

update the cpu priority setting

Parameters

struct cpupri * cp
The cpupri context
int cpu
The target cpu
int newpri
The priority (INVALID-RT99) to assign to this CPU

Note

Assumes cpu_rq(cpu)->lock is locked

Return

(void)

int cpupri_init(struct cpupri * cp)

initialize the cpupri structure

Parameters

struct cpupri * cp
The cpupri context

Return

-ENOMEM on memory allocation failure.

void cpupri_cleanup(struct cpupri * cp)

clean up the cpupri structure

Parameters

struct cpupri * cp
The cpupri context

void cpu_load_update(struct rq * this_rq, unsigned long this_load, unsigned long pending_updates)

update the rq->cpu_load[] statistics

Parameters

struct rq * this_rq
The rq to update statistics for
unsigned long this_load
The current load
unsigned long pending_updates
The number of missed updates

Description

Update rq->cpu_load[] statistics. This function is usually called every scheduler tick (TICK_NSEC).

This function computes a decaying average:

load[i]’ = (1 - 1/2^i) * load[i] + (1/2^i) * load

Because of NOHZ it might not get called on every tick which gives need for the pending_updates argument.

load[i]_n = (1 - 1/2^i) * load[i]_n-1 + (1/2^i) * load_n-1
          = A * load[i]_n-1 + B ; A := (1 - 1/2^i), B := (1/2^i) * load
          = A * (A * load[i]_n-2 + B) + B
          = A * (A * (A * load[i]_n-3 + B) + B) + B
          = A^3 * load[i]_n-3 + (A^2 + A + 1) * B
          = A^n * load[i]_0 + (A^(n-1) + A^(n-2) + ... + 1) * B
          = A^n * load[i]_0 + ((1 - A^n) / (1 - A)) * B
          = (1 - 1/2^i)^n * (load[i]_0 - load) + load

In the above we’ve assumed load_n := load, which is true for NOHZ_FULL as any change in load would have resulted in the tick being turned back on.

For regular NOHZ, this reduces to:

load[i]_n = (1 - 1/2^i)^n * load[i]_0

see decay_load_missed(). For NOHZ_FULL we get to subtract and add the extra term.

int get_sd_load_idx(struct sched_domain * sd, enum cpu_idle_type idle)

Obtain the load index for a given sched domain.

Parameters

struct sched_domain * sd
The sched_domain whose load_idx is to be obtained.
enum cpu_idle_type idle
The idle status of the CPU for whose sd load_idx is obtained.

Return

The load index.

void update_sg_lb_stats(struct lb_env * env, struct sched_group * group, int load_idx, int local_group, struct sg_lb_stats * sgs, bool * overload)

Update sched_group’s statistics for load balancing.

Parameters

struct lb_env * env
The load balancing environment.
struct sched_group * group
sched_group whose statistics are to be updated.
int load_idx
Load index of sched_domain of this_cpu for load calc.
int local_group
Does group contain this_cpu.
struct sg_lb_stats * sgs
variable to hold the statistics for this group.
bool * overload
Indicate more than one runnable task for any CPU.

bool update_sd_pick_busiest(struct lb_env * env, struct sd_lb_stats * sds, struct sched_group * sg, struct sg_lb_stats * sgs)

return 1 on busiest group

Parameters

struct lb_env * env
The load balancing environment.
struct sd_lb_stats * sds
sched_domain statistics
struct sched_group * sg
sched_group candidate to be checked for being the busiest
struct sg_lb_stats * sgs
sched_group statistics

Description

Determine if sg is a busier group than the previously selected busiest group.

Return

true if sg is a busier group than the previously selected busiest group. false otherwise.

void update_sd_lb_stats(struct lb_env * env, struct sd_lb_stats * sds)

Update sched_domain’s statistics for load balancing.

Parameters

struct lb_env * env
The load balancing environment.
struct sd_lb_stats * sds
variable to hold the statistics for this sched_domain.
int check_asym_packing(struct lb_env * env, struct sd_lb_stats * sds)

Check to see if the group is packed into the sched domain.

Parameters

struct lb_env * env
The load balancing environment.
struct sd_lb_stats * sds
Statistics of the sched_domain which is to be packed

Description

This is primarily intended to be used at the sibling level. Some cores like POWER7 prefer to use lower-numbered SMT threads. In the case of POWER7, it can move to lower SMT modes only when higher threads are idle. When in lower SMT modes, the threads will perform better since they share fewer core resources. Hence when we have idle threads, we want them to be the higher ones.

This packing function is run on idle threads. It checks to see if the busiest CPU in this domain (core in the P7 case) has a higher CPU number than the packing function is being run on. Here we are assuming a lower CPU number will be equivalent to a lower SMT thread number.

Return

1 when packing is required and a task should be moved to this CPU. The amount of the imbalance is returned in *imbalance.

void fix_small_imbalance(struct lb_env * env, struct sd_lb_stats * sds)

Calculate the minor imbalance that exists amongst the groups of a sched_domain, during load balancing.

Parameters

struct lb_env * env
The load balancing environment.
struct sd_lb_stats * sds
Statistics of the sched_domain whose imbalance is to be calculated.

void calculate_imbalance(struct lb_env * env, struct sd_lb_stats * sds)

Calculate the amount of imbalance present within the groups of a given sched_domain during load balance.

Parameters

struct lb_env * env
load balance environment
struct sd_lb_stats * sds
statistics of the sched_domain whose imbalance is to be calculated.

struct sched_group * find_busiest_group(struct lb_env * env)

Returns the busiest group within the sched_domain if there is an imbalance.

Parameters

struct lb_env * env
The load balancing environment.

Description

Also calculates the amount of weighted load which should be moved to restore balance.

Return

  • The busiest group if imbalance exists.

DECLARE_COMPLETION(work)

declare and initialize a completion structure

Parameters

work
identifier for the completion structure

Description

This macro declares and initializes a completion structure. Generally used for static declarations. You should use the _ONSTACK variant for automatic variables.

DECLARE_COMPLETION_ONSTACK(work)

declare and initialize a completion structure

Parameters

work
identifier for the completion structure

Description

This macro declares and initializes a completion structure on the kernel stack.

void init_completion(struct completion * x)

Initialize a dynamically allocated completion

Parameters

struct completion * x
pointer to completion structure that is to be initialized

Description

This inline function will initialize a dynamically created completion structure.

void reinit_completion(struct completion * x)

reinitialize a completion structure

Parameters

struct completion * x
pointer to completion structure that is to be reinitialized

Description

This inline function should be used to reinitialize a completion structure so it can be reused. This is especially important after complete_all() is used.

unsigned long __round_jiffies(unsigned long j, int cpu)

function to round jiffies to a full second

Parameters

unsigned long j
the time in (absolute) jiffies that should be rounded
int cpu
the processor number on which the timeout will happen

Description

__round_jiffies() rounds an absolute time in the future (in jiffies) up or down to (approximately) full seconds. This is useful for timers for which the exact time they fire does not matter too much, as long as they fire approximately every X seconds.

By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.

The exact rounding is skewed for each processor to avoid all processors firing at the exact same time, which could lead to lock contention or spurious cache line bouncing.

The return value is the rounded version of the j parameter.

unsigned long __round_jiffies_relative(unsigned long j, int cpu)

function to round jiffies to a full second

Parameters

unsigned long j
the time in (relative) jiffies that should be rounded
int cpu
the processor number on which the timeout will happen

Description

__round_jiffies_relative() rounds a time delta in the future (in jiffies) up or down to (approximately) full seconds. This is useful for timers for which the exact time they fire does not matter too much, as long as they fire approximately every X seconds.

By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.

The exact rounding is skewed for each processor to avoid all processors firing at the exact same time, which could lead to lock contention or spurious cache line bouncing.

The return value is the rounded version of the j parameter.

unsigned long round_jiffies(unsigned long j)

function to round jiffies to a full second

Parameters

unsigned long j
the time in (absolute) jiffies that should be rounded

Description

round_jiffies() rounds an absolute time in the future (in jiffies) up or down to (approximately) full seconds. This is useful for timers for which the exact time they fire does not matter too much, as long as they fire approximately every X seconds.

By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.

The return value is the rounded version of the j parameter.

unsigned long round_jiffies_relative(unsigned long j)

function to round jiffies to a full second

Parameters

unsigned long j
the time in (relative) jiffies that should be rounded

Description

round_jiffies_relative() rounds a time delta in the future (in jiffies) up or down to (approximately) full seconds. This is useful for timers for which the exact time they fire does not matter too much, as long as they fire approximately every X seconds.

By rounding these timers to whole seconds, all such timers will fire at the same time, rather than at various times spread out. The goal of this is to have the CPU wake up less, which saves power.

The return value is the rounded version of the j parameter.

unsigned long __round_jiffies_up(unsigned long j, int cpu)

function to round jiffies up to a full second

Parameters

unsigned long j
the time in (absolute) jiffies that should be rounded
int cpu
the processor number on which the timeout will happen

Description

This is the same as __round_jiffies() except that it will never round down. This is useful for timeouts for which the exact time of firing does not matter too much, as long as they don’t fire too early.

unsigned long __round_jiffies_up_relative(unsigned long j, int cpu)

function to round jiffies up to a full second

Parameters

unsigned long j
the time in (relative) jiffies that should be rounded
int cpu
the processor number on which the timeout will happen

Description

This is the same as __round_jiffies_relative() except that it will never round down. This is useful for timeouts for which the exact time of firing does not matter too much, as long as they don’t fire too early.

unsigned long round_jiffies_up(unsigned long j)

function to round jiffies up to a full second

Parameters

unsigned long j
the time in (absolute) jiffies that should be rounded

Description

This is the same as round_jiffies() except that it will never round down. This is useful for timeouts for which the exact time of firing does not matter too much, as long as they don’t fire too early.

unsigned long round_jiffies_up_relative(unsigned long j)

function to round jiffies up to a full second

Parameters

unsigned long j
the time in (relative) jiffies that should be rounded

Description

This is the same as round_jiffies_relative() except that it will never round down. This is useful for timeouts for which the exact time of firing does not matter too much, as long as they don’t fire too early.

void set_timer_slack(struct timer_list * timer, int slack_hz)

set the allowed slack for a timer

Parameters

struct timer_list * timer
the timer to be modified
int slack_hz
the amount of time (in jiffies) allowed for rounding

Description

Set the amount of time, in jiffies, that a certain timer has in terms of slack. By setting this value, the timer subsystem will schedule the actual timer somewhere between the time mod_timer() asks for, and that time plus the slack.

By setting the slack to -1, a percentage of the delay is used instead.

void init_timer_key(struct timer_list * timer, unsigned int flags, const char * name, struct lock_class_key * key)

initialize a timer

Parameters

struct timer_list * timer
the timer to be initialized
unsigned int flags
timer flags
const char * name
name of the timer
struct lock_class_key * key
lockdep class key of the fake lock used for tracking timer sync lock dependencies

Description

init_timer_key() must be done to a timer prior to calling any of the other timer functions.

int mod_timer_pending(struct timer_list * timer, unsigned long expires)

modify a pending timer’s timeout

Parameters

struct timer_list * timer
the pending timer to be modified
unsigned long expires
new timeout in jiffies

Description

mod_timer_pending() is the same for pending timers as mod_timer(), but will not re-activate and modify already deleted timers.

It is useful for unserialized use of timers.

int mod_timer(struct timer_list * timer, unsigned long expires)

modify a timer’s timeout

Parameters

struct timer_list * timer
the timer to be modified
unsigned long expires
new timeout in jiffies

Description

mod_timer() is a more efficient way to update the expire field of an active timer (if the timer is inactive it will be activated)

mod_timer(timer, expires) is equivalent to:

del_timer(timer);
timer->expires = expires;
add_timer(timer);

Note that if there are multiple unserialized concurrent users of the same timer, then mod_timer() is the only safe way to modify the timeout, since add_timer() cannot modify an already running timer.

The function returns whether it has modified a pending timer or not. (ie. mod_timer() of an inactive timer returns 0, mod_timer() of an active timer returns 1.)

int mod_timer_pinned(struct timer_list * timer, unsigned long expires)

modify a timer’s timeout

Parameters

struct timer_list * timer
the timer to be modified
unsigned long expires
new timeout in jiffies

Description

mod_timer_pinned() is a way to update the expire field of an active timer (if the timer is inactive it will be activated) and to ensure that the timer is scheduled on the current CPU.

Note that this does not prevent the timer from being migrated when the current CPU goes offline. If this is a problem for you, use CPU-hotplug notifiers to handle it correctly, for example, cancelling the timer when the corresponding CPU goes offline.

mod_timer_pinned(timer, expires) is equivalent to:

del_timer(timer);
timer->expires = expires;
add_timer(timer);

void add_timer(struct timer_list * timer)

start a timer

Parameters

struct timer_list * timer
the timer to be added

Description

The kernel will do a ->function(->data) callback from the timer interrupt at the ->expires point in the future. The current time is ‘jiffies’.

The timer's ->expires, ->function (and if the handler uses it, ->data) fields must be set prior to calling this function.

Timers with an ->expires field in the past will be executed in the next timer tick.

void add_timer_on(struct timer_list * timer, int cpu)

start a timer on a particular CPU

Parameters

struct timer_list * timer
the timer to be added
int cpu
the CPU to start it on

Description

This is not very scalable on SMP. Double adds are not possible.

int del_timer(struct timer_list * timer)

deactivate a timer.

Parameters

struct timer_list * timer
the timer to be deactivated

Description

del_timer() deactivates a timer - this works on both active and inactive timers.

The function returns whether it has deactivated a pending timer or not. (ie. del_timer() of an inactive timer returns 0, del_timer() of an active timer returns 1.)

int try_to_del_timer_sync(struct timer_list * timer)

Try to deactivate a timer

Parameters

struct timer_list * timer
the timer to deactivate

Description

This function tries to deactivate a timer. Upon successful (ret >= 0) exit the timer is not queued and the handler is not running on any CPU.

int del_timer_sync(struct timer_list * timer)

deactivate a timer and wait for the handler to finish.

Parameters

struct timer_list * timer
the timer to be deactivated

Description

This function only differs from del_timer() on SMP: besides deactivating the timer it also makes sure the handler has finished executing on other CPUs.

Synchronization rules: Callers must prevent restarting of the timer, otherwise this function is meaningless. It must not be called from interrupt contexts unless the timer is an irqsafe one. The caller must not hold locks which would prevent completion of the timer’s handler. The timer’s handler must not call add_timer_on(). Upon exit the timer is not queued and the handler is not running on any CPU.

Note

For !irqsafe timers, you must not hold locks that are held in interrupt context while calling this function, even if the lock has nothing to do with the timer in question. Here's why:

CPU0                              CPU1
----                              ----
                                  <SOFTIRQ>
                                    call_timer_fn();
                                      base->running_timer = mytimer;
spin_lock_irq(somelock);
                                  <IRQ>
                                    spin_lock(somelock);
del_timer_sync(mytimer);
  while (base->running_timer == mytimer);

Now del_timer_sync() will never return and never release somelock. The interrupt on the other CPU is waiting to grab somelock but it has interrupted the softirq that CPU0 is waiting to finish.

The function returns whether it has deactivated a pending timer or not.

signed long __sched schedule_timeout(signed long timeout)

sleep until timeout

Parameters

signed long timeout
timeout value in jiffies

Description

Make the current task sleep until timeout jiffies have elapsed. The routine will return immediately unless the current task state has been set (see set_current_state()).

You can set the task state as follows -

TASK_UNINTERRUPTIBLE - at least timeout jiffies are guaranteed to pass before the routine returns. The routine will return 0.

TASK_INTERRUPTIBLE - the routine may return early if a signal is delivered to the current task. In this case the remaining time in jiffies will be returned, or 0 if the timer expired in time.

The current task state is guaranteed to be TASK_RUNNING when this routine returns.

Specifying a timeout value of MAX_SCHEDULE_TIMEOUT will schedule the CPU away without a bound on the timeout. In this case the return value will be MAX_SCHEDULE_TIMEOUT.

In all cases the return value is guaranteed to be non-negative.

void msleep(unsigned int msecs)

sleep safely even with waitqueue interruptions

Parameters

unsigned int msecs
Time in milliseconds to sleep for

unsigned long msleep_interruptible(unsigned int msecs)

sleep waiting for signals

Parameters

unsigned int msecs
Time in milliseconds to sleep for

void __sched usleep_range(unsigned long min, unsigned long max)

Drop-in replacement for udelay where wakeup is flexible

Parameters

unsigned long min
Minimum time in usecs to sleep
unsigned long max
Maximum time in usecs to sleep

Wait queues and Wake events

int waitqueue_active(wait_queue_head_t * q)

locklessly test for waiters on the queue

Parameters

wait_queue_head_t * q
the waitqueue to test for waiters

Description

returns true if the wait list is not empty

NOTE

this function is lockless and requires care; incorrect usage _will_ lead to sporadic and non-obvious failure.

Use either while holding wait_queue_head_t::lock or when used for wakeups with an extra smp_mb() like:

CPU0 - waker                      CPU1 - waiter

                                  for (;;) {
cond = true;                        prepare_to_wait(wq, wait, state);
smp_mb();                           // smp_mb() from set_current_state()
if (waitqueue_active(wq))           if (cond)
  wake_up(wq);                        break;
                                    schedule();
                                  }
                                  finish_wait(wq, wait);

Because without the explicit smp_mb() it’s possible for the waitqueue_active() load to get hoisted over the cond store such that we’ll observe an empty wait list while the waiter might not observe cond.

Also note that this ‘optimization’ trades a spin_lock() for an smp_mb(), which (when the lock is uncontended) are of roughly equal cost.

bool wq_has_sleeper(wait_queue_head_t * wq)

check if there are any waiting processes

Parameters

wait_queue_head_t * wq
wait queue head

Description

Returns true if wq has waiting processes

Please refer to the comment for waitqueue_active.

wait_event(wq, condition)

sleep until a condition gets true

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for

Description

The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.
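The pairing of wait_event() with wake_up() can be sketched as follows (the queue and flag names are illustrative):

```c
static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static bool data_ready;

/* consumer: sleeps in TASK_UNINTERRUPTIBLE until data_ready is true */
static void consumer(void)
{
        wait_event(my_wq, data_ready);
        /* ... process the data ... */
}

/* producer: change the condition first, then wake the queue */
static void producer(void)
{
        data_ready = true;
        wake_up(&my_wq);
}
```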

wait_event_freezable(wq, condition)

sleep (or freeze) until a condition gets true

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for

Description

The process is put to sleep (TASK_INTERRUPTIBLE – so as not to contribute to system load) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

wait_event_timeout(wq, condition, timeout)

sleep until a condition gets true or a timeout elapses

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
timeout
timeout, in jiffies

Description

The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

Return

0 if the condition evaluated to false after the timeout elapsed, 1 if the condition evaluated to true after the timeout elapsed, or the remaining jiffies (at least 1) if the condition evaluated to true before the timeout elapsed.
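A sketch of handling the three-way return value (names are illustrative):

```c
static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static bool data_ready;

static int mydev_wait_for_data(void)
{
        /* wait up to one second (HZ jiffies) for data_ready */
        long remaining = wait_event_timeout(my_wq, data_ready, HZ);

        if (!remaining)
                return -ETIMEDOUT;      /* condition still false at timeout */
        /* condition became true; 'remaining' (at least 1) jiffies were left */
        return 0;
}
```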

wait_event_cmd(wq, condition, cmd1, cmd2)

sleep until a condition gets true

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
cmd1
the command will be executed before sleep
cmd2
the command will be executed after sleep

Description

The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

wait_event_interruptible(wq, condition)

sleep until a condition gets true

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.
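In a driver's read path the -ERESTARTSYS return is typically propagated straight back so the syscall can be restarted after the signal is handled; a sketch with hypothetical names:

```c
static DECLARE_WAIT_QUEUE_HEAD(read_wq);
static bool data_ready;

static ssize_t mydev_read(struct file *file, char __user *buf,
                          size_t count, loff_t *ppos)
{
        int ret = wait_event_interruptible(read_wq, data_ready);

        if (ret)                /* -ERESTARTSYS: a signal arrived first */
                return ret;
        /* ... condition is true: copy the data to user space ... */
        return count;
}
```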

wait_event_interruptible_timeout(wq, condition, timeout)

sleep until a condition gets true or a timeout elapses

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
timeout
timeout, in jiffies

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

Return

0 if the condition evaluated to false after the timeout elapsed, 1 if the condition evaluated to true after the timeout elapsed, the remaining jiffies (at least 1) if the condition evaluated to true before the timeout elapsed, or -ERESTARTSYS if it was interrupted by a signal.

wait_event_hrtimeout(wq, condition, timeout)

sleep until a condition gets true or a timeout elapses

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
timeout
timeout, as a ktime_t

Description

The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

The function returns 0 if condition became true, or -ETIME if the timeout elapsed.

wait_event_interruptible_hrtimeout(wq, condition, timeout)

sleep until a condition gets true or a timeout elapses

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
timeout
timeout, as a ktime_t

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

The function returns 0 if condition became true, -ERESTARTSYS if it was interrupted by a signal, or -ETIME if the timeout elapsed.

wait_event_interruptible_locked(wq, condition)

sleep until a condition gets true

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.

The lock is locked/unlocked using spin_lock()/spin_unlock() functions which must match the way they are locked/unlocked outside of this macro.

wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.

The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.

wait_event_interruptible_locked_irq(wq, condition)

sleep until a condition gets true

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.

The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq() functions which must match the way they are locked/unlocked outside of this macro.

wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.

The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.

wait_event_interruptible_exclusive_locked(wq, condition)

sleep exclusively until a condition gets true

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.

The lock is locked/unlocked using spin_lock()/spin_unlock() functions which must match the way they are locked/unlocked outside of this macro.

The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag set, so when other processes are waiting on the list and this process is woken up, further processes are not considered.

wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.

The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.

wait_event_interruptible_exclusive_locked_irq(wq, condition)

sleep exclusively until a condition gets true

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

It must be called with wq.lock being held. This spinlock is unlocked while sleeping but condition testing is done while lock is held and when this macro exits the lock is held.

The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq() functions which must match the way they are locked/unlocked outside of this macro.

The process is put on the wait queue with the WQ_FLAG_EXCLUSIVE flag set, so when other processes are waiting on the list and this process is woken up, further processes are not considered.

wake_up_locked() has to be called after changing any variable that could change the result of the wait condition.

The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.

wait_event_killable(wq, condition)

sleep until a condition gets true

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for

Description

The process is put to sleep (TASK_KILLABLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

The function will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.

wait_event_lock_irq_cmd(wq, condition, lock, cmd)

sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
lock
a locked spinlock_t, which will be released before cmd and schedule() and reacquired afterwards.
cmd
a command which is invoked outside the critical section before sleep

Description

The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

This is supposed to be called while holding the lock. The lock is dropped before invoking the cmd and going to sleep and is reacquired afterwards.

wait_event_lock_irq(wq, condition, lock)

sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
lock
a locked spinlock_t, which will be released before schedule() and reacquired afterwards.

Description

The process is put to sleep (TASK_UNINTERRUPTIBLE) until the condition evaluates to true. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

This is supposed to be called while holding the lock. The lock is dropped before going to sleep and is reacquired afterwards.

wait_event_interruptible_lock_irq_cmd(wq, condition, lock, cmd)

sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
lock
a locked spinlock_t, which will be released before cmd and schedule() and reacquired afterwards.
cmd
a command which is invoked outside the critical section before sleep

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or a signal is received. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

This is supposed to be called while holding the lock. The lock is dropped before invoking the cmd and going to sleep and is reacquired afterwards.

The macro will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.

wait_event_interruptible_lock_irq(wq, condition, lock)

sleep until a condition gets true. The condition is checked under the lock. This is expected to be called with the lock taken.

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
lock
a locked spinlock_t, which will be released before schedule() and reacquired afterwards.

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or signal is received. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

This is supposed to be called while holding the lock. The lock is dropped before going to sleep and is reacquired afterwards.

The macro will return -ERESTARTSYS if it was interrupted by a signal and 0 if condition evaluated to true.

wait_event_interruptible_lock_irq_timeout(wq, condition, lock, timeout)

sleep until a condition gets true or a timeout elapses. The condition is checked under the lock. This is expected to be called with the lock taken.

Parameters

wq
the waitqueue to wait on
condition
a C expression for the event to wait for
lock
a locked spinlock_t, which will be released before schedule() and reacquired afterwards.
timeout
timeout, in jiffies

Description

The process is put to sleep (TASK_INTERRUPTIBLE) until the condition evaluates to true or signal is received. The condition is checked each time the waitqueue wq is woken up.

wake_up() has to be called after changing any variable that could change the result of the wait condition.

This is supposed to be called while holding the lock. The lock is dropped before going to sleep and is reacquired afterwards.

The macro returns 0 if the timeout elapsed, -ERESTARTSYS if it was interrupted by a signal, or the remaining jiffies (at least 1) if the condition evaluated to true before the timeout elapsed.

int wait_on_bit(unsigned long * word, int bit, unsigned mode)

wait for a bit to be cleared

Parameters

unsigned long * word
the word being waited on, a kernel virtual address
int bit
the bit of the word being waited on
unsigned mode
the task state to sleep in

Description

There is a standard hashed waitqueue table for generic use. This is the part of the hashtable’s accessor API that waits on a bit. For instance, if one had waiters on a bitflag, one would call wait_on_bit() in the threads waiting for the bit to clear; it is used where one is waiting for the bit to clear but has no intention of setting it. The returned value will be zero if the bit was cleared, or non-zero if the process received a signal and the mode permitted wakeup on that signal.
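A minimal sketch, with a hypothetical flags word and bit number:

```c
static unsigned long mydev_flags;
#define MYDEV_BUSY 0    /* bit number within mydev_flags */

/* Block until another thread clears the BUSY bit; with
 * TASK_KILLABLE the return value is non-zero if a fatal
 * signal arrived before the bit was cleared. */
static int mydev_wait_idle(void)
{
        return wait_on_bit(&mydev_flags, MYDEV_BUSY, TASK_KILLABLE);
}
```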

int wait_on_bit_io(unsigned long * word, int bit, unsigned mode)

wait for a bit to be cleared

Parameters

unsigned long * word
the word being waited on, a kernel virtual address
int bit
the bit of the word being waited on
unsigned mode
the task state to sleep in

Description

Use the standard hashed waitqueue table to wait for a bit to be cleared. This is similar to wait_on_bit(), but calls io_schedule() instead of schedule() for the actual waiting.

Returned value will be zero if the bit was cleared, or non-zero if the process received a signal and the mode permitted wakeup on that signal.

int wait_on_bit_timeout(unsigned long * word, int bit, unsigned mode, unsigned long timeout)

wait for a bit to be cleared or a timeout elapses

Parameters

unsigned long * word
the word being waited on, a kernel virtual address
int bit
the bit of the word being waited on
unsigned mode
the task state to sleep in
unsigned long timeout
timeout, in jiffies

Description

Use the standard hashed waitqueue table to wait for a bit to be cleared. This is similar to wait_on_bit(), except also takes a timeout parameter.

Returned value will be zero if the bit was cleared before the timeout elapsed, or non-zero if the timeout elapsed or process received a signal and the mode permitted wakeup on that signal.

int wait_on_bit_action(unsigned long * word, int bit, wait_bit_action_f * action, unsigned mode)

wait for a bit to be cleared

Parameters

unsigned long * word
the word being waited on, a kernel virtual address
int bit
the bit of the word being waited on
wait_bit_action_f * action
the function used to sleep, which may take special actions
unsigned mode
the task state to sleep in

Description

Use the standard hashed waitqueue table to wait for a bit to be cleared, and allow the waiting action to be specified. This is like wait_on_bit() but allows fine control of how the waiting is done.

Returned value will be zero if the bit was cleared, or non-zero if the process received a signal and the mode permitted wakeup on that signal.

int wait_on_bit_lock(unsigned long * word, int bit, unsigned mode)

wait for a bit to be cleared, when wanting to set it

Parameters

unsigned long * word
the word being waited on, a kernel virtual address
int bit
the bit of the word being waited on
unsigned mode
the task state to sleep in

Description

There is a standard hashed waitqueue table for generic use. This is the part of the hashtable’s accessor API that waits on a bit when one intends to set it, for instance when trying to lock bitflags. If one had waiters trying to set a bitflag, each waiting for it to clear before setting it, one would call wait_on_bit_lock() in the threads waiting to be able to set the bit. One uses wait_on_bit_lock() where one is waiting for the bit to clear with the intention of setting it, and when done, clearing it.

Returns zero if the bit was (eventually) found to be clear and was set. Returns non-zero if a signal was delivered to the process and the mode allows that signal to wake the process.
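The acquire side of a bit "lock" can be sketched like this (names are hypothetical; the matching release side would clear the bit and call wake_up_bit(), as described further below):

```c
static unsigned long mydev_flags;
#define MYDEV_LOCK 0    /* bit used as a lock */

/* Wait for the bit to clear, then set it atomically; returns
 * non-zero if a signal wakes us first (TASK_INTERRUPTIBLE). */
static int mydev_lock(void)
{
        return wait_on_bit_lock(&mydev_flags, MYDEV_LOCK,
                                TASK_INTERRUPTIBLE);
}
```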

int wait_on_bit_lock_io(unsigned long * word, int bit, unsigned mode)

wait for a bit to be cleared, when wanting to set it

Parameters

unsigned long * word
the word being waited on, a kernel virtual address
int bit
the bit of the word being waited on
unsigned mode
the task state to sleep in

Description

Use the standard hashed waitqueue table to wait for a bit to be cleared and then to atomically set it. This is similar to wait_on_bit(), but calls io_schedule() instead of schedule() for the actual waiting.

Returns zero if the bit was (eventually) found to be clear and was set. Returns non-zero if a signal was delivered to the process and the mode allows that signal to wake the process.

int wait_on_bit_lock_action(unsigned long * word, int bit, wait_bit_action_f * action, unsigned mode)

wait for a bit to be cleared, when wanting to set it

Parameters

unsigned long * word
the word being waited on, a kernel virtual address
int bit
the bit of the word being waited on
wait_bit_action_f * action
the function used to sleep, which may take special actions
unsigned mode
the task state to sleep in

Description

Use the standard hashed waitqueue table to wait for a bit to be cleared and then to set it, and allow the waiting action to be specified. This is like wait_on_bit() but allows fine control of how the waiting is done.

Returns zero if the bit was (eventually) found to be clear and was set. Returns non-zero if a signal was delivered to the process and the mode allows that signal to wake the process.

int wait_on_atomic_t(atomic_t * val, int (*action)(atomic_t *), unsigned mode)

Wait for an atomic_t to become 0

Parameters

atomic_t * val
The atomic value being waited on, a kernel virtual address
int (*)(atomic_t *) action
the function used to sleep, which may take special actions
unsigned mode
the task state to sleep in

Description

Wait for an atomic_t to become 0. We abuse the bit-wait waitqueue table for the purpose of getting a waitqueue, but we set the key to a bit number outside of the target ‘word’.

void __wake_up(wait_queue_head_t * q, unsigned int mode, int nr_exclusive, void * key)

wake up threads blocked on a waitqueue.

Parameters

wait_queue_head_t * q
the waitqueue
unsigned int mode
which threads
int nr_exclusive
how many wake-one or wake-many threads to wake up
void * key
is directly passed to the wakeup function

Description

It may be assumed that this function implies a write memory barrier before changing the task state if and only if any tasks are woken up.

void __wake_up_sync_key(wait_queue_head_t * q, unsigned int mode, int nr_exclusive, void * key)

wake up threads blocked on a waitqueue.

Parameters

wait_queue_head_t * q
the waitqueue
unsigned int mode
which threads
int nr_exclusive
how many wake-one or wake-many threads to wake up
void * key
opaque value to be passed to wakeup targets

Description

The sync wakeup differs in that the waker knows it will schedule away soon, so while the target thread will be woken up, it will not be migrated to another CPU - i.e. the two threads are ‘synchronized’ with each other. This can prevent needless bouncing between CPUs.

On UP it can prevent extra preemption.

It may be assumed that this function implies a write memory barrier before changing the task state if and only if any tasks are woken up.

void finish_wait(wait_queue_head_t * q, wait_queue_t * wait)

clean up after waiting in a queue

Parameters

wait_queue_head_t * q
waitqueue waited on
wait_queue_t * wait
wait descriptor

Description

Sets current thread back to running state and removes the wait descriptor from the given waitqueue if still queued.

void abort_exclusive_wait(wait_queue_head_t * q, wait_queue_t * wait, unsigned int mode, void * key)

abort exclusive waiting in a queue

Parameters

wait_queue_head_t * q
waitqueue waited on
wait_queue_t * wait
wait descriptor
unsigned int mode
runstate of the waiter to be woken
void * key
key to identify a wait bit queue or NULL

Description

Sets current thread back to running state and removes the wait descriptor from the given waitqueue if still queued.

Wakes up the next waiter if the caller is concurrently woken up through the queue.

This prevents waiter starvation where an exclusive waiter aborts and is woken up concurrently and no one wakes up the next waiter.

void wake_up_bit(void * word, int bit)

wake up a waiter on a bit

Parameters

void * word
the word being waited on, a kernel virtual address
int bit
the bit of the word being waited on

Description

There is a standard hashed waitqueue table for generic use. This is the part of the hashtable’s accessor API that wakes up waiters on a bit. For instance, if one were to have waiters on a bitflag, one would call wake_up_bit() after clearing the bit.

In order for this to function properly, as it uses waitqueue_active() internally, some kind of memory barrier must be done prior to calling this. Typically, this will be smp_mb__after_atomic(), but in some cases where bitflags are manipulated non-atomically under a lock, one may need a less regular barrier, such as fs/inode.c’s smp_mb(), because spin_unlock() does not guarantee a memory barrier.
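The release side of a bit "lock" taken with wait_on_bit_lock() can be sketched as follows (names are hypothetical):

```c
static unsigned long mydev_flags;
#define MYDEV_LOCK 0

/* Clear the bit, insert the barrier that wake_up_bit()'s internal
 * waitqueue_active() check requires, then wake any waiters. */
static void mydev_unlock(void)
{
        clear_bit(MYDEV_LOCK, &mydev_flags);
        smp_mb__after_atomic();         /* order clear_bit vs. the wakeup check */
        wake_up_bit(&mydev_flags, MYDEV_LOCK);
}
```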

void wake_up_atomic_t(atomic_t * p)

Wake up a waiter on an atomic_t

Parameters

atomic_t * p
The atomic_t being waited on, a kernel virtual address

Description

Wake up anyone waiting for the atomic_t to go to zero.

Abuse the bit-waker function and its waitqueue hash table set (the atomic_t check is done by the waiter’s wake function, not by the waker itself).

High-resolution timers

ktime_t ktime_set(const s64 secs, const unsigned long nsecs)

Set a ktime_t variable from a seconds/nanoseconds value

Parameters

const s64 secs
seconds to set
const unsigned long nsecs
nanoseconds to set

Return

The ktime_t representation of the value.

int ktime_equal(const ktime_t cmp1, const ktime_t cmp2)

Compares two ktime_t variables to see if they are equal

Parameters

const ktime_t cmp1
comparable1
const ktime_t cmp2
comparable2

Description

Compare two ktime_t variables.

Return

1 if equal.

int ktime_compare(const ktime_t cmp1, const ktime_t cmp2)

Compares two ktime_t variables for less, greater or equal

Parameters

const ktime_t cmp1
comparable1
const ktime_t cmp2
comparable2

Return

cmp1  < cmp2: return <0
cmp1 == cmp2: return 0
cmp1  > cmp2: return >0
bool ktime_after(const ktime_t cmp1, const ktime_t cmp2)

Compare if a ktime_t value is bigger than another one.

Parameters

const ktime_t cmp1
comparable1
const ktime_t cmp2
comparable2

Return

true if cmp1 happened after cmp2.

bool ktime_before(const ktime_t cmp1, const ktime_t cmp2)

Compare if a ktime_t value is smaller than another one.

Parameters

const ktime_t cmp1
comparable1
const ktime_t cmp2
comparable2

Return

true if cmp1 happened before cmp2.

bool ktime_to_timespec_cond(const ktime_t kt, struct timespec * ts)

convert a ktime_t variable to timespec format only if the variable contains data

Parameters

const ktime_t kt
the ktime_t variable to convert
struct timespec * ts
the timespec variable to store the result in

Return

true if there was a successful conversion, false if kt was 0.

bool ktime_to_timespec64_cond(const ktime_t kt, struct timespec64 * ts)

convert a ktime_t variable to timespec64 format only if the variable contains data

Parameters

const ktime_t kt
the ktime_t variable to convert
struct timespec64 * ts
the timespec variable to store the result in

Return

true if there was a successful conversion, false if kt was 0.

struct hrtimer

the basic hrtimer structure

Definition

struct hrtimer {
  struct timerqueue_node node;
  ktime_t _softexpires;
  enum hrtimer_restart                (* function) (struct hrtimer *);
  struct hrtimer_clock_base * base;
  u8 state;
  u8 is_rel;
#ifdef CONFIG_TIMER_STATS
  int start_pid;
  void * start_site;
  char start_comm[16];
#endif
};

Members

struct timerqueue_node node
timerqueue node, which also manages node.expires, the absolute expiry time in the hrtimers internal representation. The time is related to the clock on which the timer is based. It is set up by adding slack to the _softexpires value. For non-range timers it is identical to _softexpires.
ktime_t _softexpires
the absolute earliest expiry time of the hrtimer. The time which was given as expiry time when the timer was armed.
enum hrtimer_restart          (*)(struct hrtimer *) function
timer expiry callback function
struct hrtimer_clock_base * base
pointer to the timer base (per cpu and per clock)
u8 state
state information (See bit values above)
u8 is_rel
Set if the timer was armed relative
int start_pid
timer statistics field to store the pid of the task which started the timer
void * start_site
timer statistics field to store the site where the timer was started
char start_comm[16]
timer statistics field to store the name of the process which started the timer

Description

The hrtimer structure must be initialized by hrtimer_init()

struct hrtimer_sleeper

simple sleeper structure

Definition

struct hrtimer_sleeper {
  struct hrtimer timer;
  struct task_struct * task;
};

Members

struct hrtimer timer
embedded timer structure
struct task_struct * task
task to wake up

Description

task is set to NULL, when the timer expires.

struct hrtimer_clock_base

the timer base for a specific clock

Definition

struct hrtimer_clock_base {
  struct hrtimer_cpu_base * cpu_base;
  int index;
  clockid_t clockid;
  struct timerqueue_head active;
  ktime_t (* get_time) (void);
  ktime_t offset;
};

Members

struct hrtimer_cpu_base * cpu_base
per cpu clock base
int index
clock type index for per_cpu support when moving a timer to a base on another cpu.
clockid_t clockid
clock id for per_cpu support
struct timerqueue_head active
red black tree root node for the active timers
ktime_t (*)(void) get_time
function to retrieve the current time of the clock
ktime_t offset
offset of this clock to the monotonic base
void hrtimer_start(struct hrtimer * timer, ktime_t tim, const enum hrtimer_mode mode)

(re)start an hrtimer on the current CPU

Parameters

struct hrtimer * timer
the timer to be added
ktime_t tim
expiry time
const enum hrtimer_mode mode
expiry mode: absolute (HRTIMER_MODE_ABS) or relative (HRTIMER_MODE_REL)
u64 hrtimer_forward_now(struct hrtimer * timer, ktime_t interval)

forward the timer expiry so it expires after now

Parameters

struct hrtimer * timer
hrtimer to forward
ktime_t interval
the interval to forward

Description

Forward the timer expiry so it will expire after the current time of the hrtimer clock base. Returns the number of overruns.

Can be safely called from the callback function of timer. If called from other contexts timer must neither be enqueued nor running the callback and the caller needs to take care of serialization.

Note

This only updates the timer expiry value and does not requeue the timer.
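Because hrtimer_forward_now() only moves the expiry, a periodic timer re-arms itself by returning HRTIMER_RESTART from the callback; a sketch with hypothetical names and a 10 ms period:

```c
static struct hrtimer mydev_timer;

static enum hrtimer_restart mydev_timer_fn(struct hrtimer *t)
{
        /* push the expiry past now by one period ... */
        hrtimer_forward_now(t, ms_to_ktime(10));
        /* ... and return HRTIMER_RESTART so the core requeues it */
        return HRTIMER_RESTART;
}

static void mydev_timer_setup(void)
{
        hrtimer_init(&mydev_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        mydev_timer.function = mydev_timer_fn;
        hrtimer_start(&mydev_timer, ms_to_ktime(10), HRTIMER_MODE_REL);
}
```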

u64 hrtimer_forward(struct hrtimer * timer, ktime_t now, ktime_t interval)

forward the timer expiry

Parameters

struct hrtimer * timer
hrtimer to forward
ktime_t now
forward past this time
ktime_t interval
the interval to forward

Description

Forward the timer expiry so it will expire in the future. Returns the number of overruns.

Can be safely called from the callback function of timer. If called from other contexts timer must neither be enqueued nor running the callback and the caller needs to take care of serialization.

Note

This only updates the timer expiry value and does not requeue the timer.

void hrtimer_start_range_ns(struct hrtimer * timer, ktime_t tim, u64 delta_ns, const enum hrtimer_mode mode)

(re)start an hrtimer on the current CPU

Parameters

struct hrtimer * timer
the timer to be added
ktime_t tim
expiry time
u64 delta_ns
“slack” range for the timer
const enum hrtimer_mode mode
expiry mode: absolute (HRTIMER_MODE_ABS) or relative (HRTIMER_MODE_REL)
int hrtimer_try_to_cancel(struct hrtimer * timer)

try to deactivate a timer

Parameters

struct hrtimer * timer
hrtimer to stop

Return

0 when the timer was not active
1 when the timer was active
-1 when the timer is currently executing the callback function and cannot be stopped
int hrtimer_cancel(struct hrtimer * timer)

cancel a timer and wait for the handler to finish.

Parameters

struct hrtimer * timer
the timer to be cancelled

Return

0 when the timer was not active
1 when the timer was active
ktime_t __hrtimer_get_remaining(const struct hrtimer * timer, bool adjust)

get remaining time for the timer

Parameters

const struct hrtimer * timer
the timer to read
bool adjust
adjust relative timers when CONFIG_TIME_LOW_RES=y
void hrtimer_init(struct hrtimer * timer, clockid_t clock_id, enum hrtimer_mode mode)

initialize a timer to the given clock

Parameters

struct hrtimer * timer
the timer to be initialized
clockid_t clock_id
the clock to be used
enum hrtimer_mode mode
timer mode abs/rel
int __sched schedule_hrtimeout_range(ktime_t * expires, u64 delta, const enum hrtimer_mode mode)

sleep until timeout

Parameters

ktime_t * expires
timeout value (ktime_t)
u64 delta
slack in expires timeout (ktime_t)
const enum hrtimer_mode mode
timer mode, HRTIMER_MODE_ABS or HRTIMER_MODE_REL

Description

Make the current task sleep until the given expiry time has elapsed. The routine will return immediately unless the current task state has been set (see set_current_state()).

The delta argument gives the kernel the freedom to schedule the actual wakeup to a time that is both power and performance friendly. The kernel gives the normal best-effort behavior for "expires + delta", but may decide to fire the timer earlier, though no earlier than expires.

You can set the task state as follows -

TASK_UNINTERRUPTIBLE - at least timeout time is guaranteed to pass before the routine returns.

TASK_INTERRUPTIBLE - the routine may return early if a signal is delivered to the current task.

The current task state is guaranteed to be TASK_RUNNING when this routine returns.

Returns 0 when the timer has expired otherwise -EINTR
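A sketch of the required set_current_state() + call sequence, sleeping roughly 5 ms with 500 us of slack (function name is hypothetical):

```c
/* The task state must be set before the call, or
 * schedule_hrtimeout_range() returns immediately. */
static int mydev_nap(void)
{
        ktime_t expires = ms_to_ktime(5);

        set_current_state(TASK_UNINTERRUPTIBLE);
        /* returns 0 once the (possibly coalesced) timer fires */
        return schedule_hrtimeout_range(&expires, 500 * NSEC_PER_USEC,
                                        HRTIMER_MODE_REL);
}
```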

int __sched schedule_hrtimeout(ktime_t * expires, const enum hrtimer_mode mode)

sleep until timeout

Parameters

ktime_t * expires
timeout value (ktime_t)
const enum hrtimer_mode mode
timer mode, HRTIMER_MODE_ABS or HRTIMER_MODE_REL

Description

Make the current task sleep until the given expiry time has elapsed. The routine will return immediately unless the current task state has been set (see set_current_state()).

You can set the task state as follows -

TASK_UNINTERRUPTIBLE - at least timeout time is guaranteed to pass before the routine returns.

TASK_INTERRUPTIBLE - the routine may return early if a signal is delivered to the current task.

The current task state is guaranteed to be TASK_RUNNING when this routine returns.

Returns 0 when the timer has expired otherwise -EINTR

Workqueues and Kevents

work_pending(work)

Find out whether a work item is currently pending

Parameters

work
The work item in question
delayed_work_pending(w)

Find out whether a delayable work item is currently pending

Parameters

w
The work item in question
alloc_workqueue(fmt, flags, max_active, args...)

allocate a workqueue

Parameters

fmt
printf format for the name of the workqueue
flags
WQ_* flags
max_active
max in-flight work items, 0 for default
args...
variable arguments

Description

Allocate a workqueue with the specified parameters. For detailed information on WQ_* flags, please refer to Documentation/workqueue.txt.

The __lock_name macro dance guarantees that a single lock_class_key doesn’t end up with different names, which isn’t allowed by lockdep.

Return

Pointer to the allocated workqueue on success, NULL on failure.
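
For example (the queue name and flag choice are illustrative), a driver could allocate and later tear down its own workqueue:

```c
#include <linux/workqueue.h>

struct workqueue_struct *wq;

/* Unbound, freezable workqueue with the default max_active (0). */
wq = alloc_workqueue("mydrv_wq", WQ_UNBOUND | WQ_FREEZABLE, 0);
if (!wq)
	return -ENOMEM;

/* ... queue_work(wq, ...) as needed ... */

destroy_workqueue(wq);	/* all pending work is completed first */
```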

alloc_ordered_workqueue(fmt, flags, args...)

allocate an ordered workqueue

Parameters

fmt
printf format for the name of the workqueue
flags
WQ_* flags (only WQ_FREEZABLE and WQ_MEM_RECLAIM are meaningful)
args...
variable arguments

Description

Allocate an ordered workqueue. An ordered workqueue executes at most one work item at any given time in the queued order. They are implemented as unbound workqueues with max_active of one.

Return

Pointer to the allocated workqueue on success, NULL on failure.

bool queue_work(struct workqueue_struct * wq, struct work_struct * work)

queue work on a workqueue

Parameters

struct workqueue_struct * wq
workqueue to use
struct work_struct * work
work to queue

Description

Returns false if work was already on a queue, true otherwise.

We queue the work to the CPU on which it was submitted, but if the CPU dies it can be processed by another CPU.
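
A minimal sketch of the pattern, with hypothetical names (mydrv_work_fn, mydrv_wq):

```c
#include <linux/workqueue.h>

static void mydrv_work_fn(struct work_struct *work)
{
	/* executes later in process context; may sleep */
}

static DECLARE_WORK(mydrv_work, mydrv_work_fn);

/* e.g. from an interrupt handler; returns false if already pending,
 * in which case the item will still run exactly once */
queue_work(mydrv_wq, &mydrv_work);
```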

bool queue_delayed_work(struct workqueue_struct * wq, struct delayed_work * dwork, unsigned long delay)

queue work on a workqueue after delay

Parameters

struct workqueue_struct * wq
workqueue to use
struct delayed_work * dwork
delayable work to queue
unsigned long delay
number of jiffies to wait before queueing

Description

Equivalent to queue_delayed_work_on() but tries to use the local CPU.

bool mod_delayed_work(struct workqueue_struct * wq, struct delayed_work * dwork, unsigned long delay)

modify delay of or queue a delayed work

Parameters

struct workqueue_struct * wq
workqueue to use
struct delayed_work * dwork
work to queue
unsigned long delay
number of jiffies to wait before queueing

Description

mod_delayed_work_on() on local CPU.

bool schedule_work_on(int cpu, struct work_struct * work)

put work task on a specific cpu

Parameters

int cpu
cpu to put the work task on
struct work_struct * work
job to be done

Description

This puts a job on a specific cpu

bool schedule_work(struct work_struct * work)

put work task in global workqueue

Parameters

struct work_struct * work
job to be done

Description

Returns false if work was already on the kernel-global workqueue and true otherwise.

This puts a job in the kernel-global workqueue if it was not already queued and leaves it in the same position on the kernel-global workqueue otherwise.

void flush_scheduled_work(void)

ensure that any scheduled work has run to completion.

Parameters

void
no arguments

Description

Forces execution of the kernel-global workqueue and blocks until its completion.

Think twice before calling this function! It’s very easy to get into trouble if you don’t take great care. Either of the following situations will lead to deadlock:

One of the work items currently on the workqueue needs to acquire a lock held by your code or its caller.

Your code is running in the context of a work routine.

They will be detected by lockdep when they occur, but the first might not occur very often. It depends on what work items are on the workqueue and what locks they need, which you have no control over.

In most situations flushing the entire workqueue is overkill; you merely need to know that a particular work item isn’t queued and isn’t running. In such cases you should use cancel_delayed_work_sync() or cancel_work_sync() instead.
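
A sketch of that advice in a hypothetical driver teardown path, cancelling the specific items instead of flushing the kernel-global workqueue:

```c
#include <linux/workqueue.h>

/* mydrv and its work members are illustrative */
static void mydrv_remove(struct mydrv *priv)
{
	/* each call waits if the handler is running;
	 * the item is guaranteed idle afterwards */
	cancel_work_sync(&priv->irq_work);
	cancel_delayed_work_sync(&priv->poll_work);
}
```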

bool schedule_delayed_work_on(int cpu, struct delayed_work * dwork, unsigned long delay)

queue work in global workqueue on CPU after delay

Parameters

int cpu
cpu to use
struct delayed_work * dwork
job to be done
unsigned long delay
number of jiffies to wait

Description

After waiting for a given time this puts a job in the kernel-global workqueue on the specified CPU.

bool schedule_delayed_work(struct delayed_work * dwork, unsigned long delay)

put work task in global workqueue after delay

Parameters

struct delayed_work * dwork
job to be done
unsigned long delay
number of jiffies to wait or 0 for immediate execution

Description

After waiting for a given time this puts a job in the kernel-global workqueue.

bool keventd_up(void)

is workqueue initialized yet?

Parameters

void
no arguments
bool queue_work_on(int cpu, struct workqueue_struct * wq, struct work_struct * work)

queue work on specific cpu

Parameters

int cpu
CPU number to execute work on
struct workqueue_struct * wq
workqueue to use
struct work_struct * work
work to queue

Description

We queue the work to a specific CPU, the caller must ensure it can’t go away.

Return

false if work was already on a queue, true otherwise.

bool queue_delayed_work_on(int cpu, struct workqueue_struct * wq, struct delayed_work * dwork, unsigned long delay)

queue work on specific CPU after delay

Parameters

int cpu
CPU number to execute work on
struct workqueue_struct * wq
workqueue to use
struct delayed_work * dwork
work to queue
unsigned long delay
number of jiffies to wait before queueing

Return

false if work was already on a queue, true otherwise. If delay is zero and dwork is idle, it will be scheduled for immediate execution.

bool mod_delayed_work_on(int cpu, struct workqueue_struct * wq, struct delayed_work * dwork, unsigned long delay)

modify delay of or queue a delayed work on specific CPU

Parameters

int cpu
CPU number to execute work on
struct workqueue_struct * wq
workqueue to use
struct delayed_work * dwork
work to queue
unsigned long delay
number of jiffies to wait before queueing

Description

If dwork is idle, equivalent to queue_delayed_work_on(); otherwise, modify dwork's timer so that it expires after delay. If delay is zero, work is guaranteed to be scheduled immediately regardless of its current state.

Return

false if dwork was idle and queued, true if dwork was pending and its timer was modified.

This function is safe to call from any context including IRQ handler. See try_to_grab_pending() for details.
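
The "modify or queue" semantics make this a natural fit for debouncing a burst of events; the names below are hypothetical:

```c
#include <linux/workqueue.h>
#include <linux/jiffies.h>

/* Each event pushes the expiry out to "now + 100 ms"; the handler
 * runs once, 100 ms after the last event in the burst. */
static void mydrv_event(struct mydrv *priv)
{
	mod_delayed_work(system_wq, &priv->debounce_work,
			 msecs_to_jiffies(100));
}
```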

void flush_workqueue(struct workqueue_struct * wq)

ensure that any scheduled work has run to completion.

Parameters

struct workqueue_struct * wq
workqueue to flush

Description

This function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.

void drain_workqueue(struct workqueue_struct * wq)

drain a workqueue

Parameters

struct workqueue_struct * wq
workqueue to drain

Description

Wait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running work items on wq can queue further work items on it. wq is flushed repeatedly until it becomes empty. The number of flushes is determined by the depth of chaining and should be relatively short. Whine if it takes too long.

bool flush_work(struct work_struct * work)

wait for a work to finish executing the last queueing instance

Parameters

struct work_struct * work
the work to flush

Description

Wait until work has finished execution. work is guaranteed to be idle on return if it hasn’t been requeued since flush started.

Return

true if flush_work() waited for the work to finish execution, false if it was already idle.

bool cancel_work_sync(struct work_struct * work)

cancel a work and wait for it to finish

Parameters

struct work_struct * work
the work to cancel

Description

Cancel work and wait for its execution to finish. This function can be used even if the work re-queues itself or migrates to another workqueue. On return from this function, work is guaranteed to be not pending or executing on any CPU.

cancel_work_sync(&delayed_work->work) must not be used for delayed_works. Use cancel_delayed_work_sync() instead.

The caller must ensure that the workqueue on which work was last queued can’t be destroyed before this function returns.

Return

true if work was pending, false otherwise.

bool flush_delayed_work(struct delayed_work * dwork)

wait for a dwork to finish executing the last queueing

Parameters

struct delayed_work * dwork
the delayed work to flush

Description

Delayed timer is cancelled and the pending work is queued for immediate execution. Like flush_work(), this function only considers the last queueing instance of dwork.

Return

true if flush_work() waited for the work to finish execution, false if it was already idle.

bool cancel_delayed_work(struct delayed_work * dwork)

cancel a delayed work

Parameters

struct delayed_work * dwork
delayed_work to cancel

Description

Kill off a pending delayed_work.

Return

true if dwork was pending and canceled; false if it wasn’t pending.

Note

The work callback function may still be running on return, unless it returns true and the work doesn’t re-arm itself. Explicitly flush or use cancel_delayed_work_sync() to wait on it.

This function is safe to call from any context including IRQ handler.

bool cancel_delayed_work_sync(struct delayed_work * dwork)

cancel a delayed work and wait for it to finish

Parameters

struct delayed_work * dwork
the delayed work to cancel

Description

This is cancel_work_sync() for delayed works.

Return

true if dwork was pending, false otherwise.

int execute_in_process_context(work_func_t fn, struct execute_work * ew)

reliably execute the routine with user context

Parameters

work_func_t fn
the function to execute
struct execute_work * ew
guaranteed storage for the execute work structure (must be available when the work executes)

Description

Executes the function immediately if process context is available, otherwise schedules the function for delayed execution.

Return

0 - function was executed
1 - function was scheduled for execution
void destroy_workqueue(struct workqueue_struct * wq)

safely terminate a workqueue

Parameters

struct workqueue_struct * wq
target workqueue

Description

Safely destroy a workqueue. All work currently pending will be done first.

void workqueue_set_max_active(struct workqueue_struct * wq, int max_active)

adjust max_active of a workqueue

Parameters

struct workqueue_struct * wq
target workqueue
int max_active
new max_active value.

Description

Set max_active of wq to max_active.

Context

Don’t call from IRQ context.

bool workqueue_congested(int cpu, struct workqueue_struct * wq)

test whether a workqueue is congested

Parameters

int cpu
CPU in question
struct workqueue_struct * wq
target workqueue

Description

Test whether wq's cpu workqueue for cpu is congested. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.

If cpu is WORK_CPU_UNBOUND, the test is performed on the local CPU. Note that both per-cpu and unbound workqueues may be associated with multiple pool_workqueues which have separate congested states. A workqueue being congested on one CPU doesn’t mean the workqueue is also congested on other CPUs / NUMA nodes.

Return

true if congested, false otherwise.

unsigned int work_busy(struct work_struct * work)

test whether a work is currently pending or running

Parameters

struct work_struct * work
the work to be tested

Description

Test whether work is currently pending or running. There is no synchronization around this function and the test result is unreliable and only useful as advisory hints or for debugging.

Return

OR’d bitmask of WORK_BUSY_* bits.

long work_on_cpu(int cpu, long (*fn) (void *), void * arg)

run a function in thread context on a particular cpu

Parameters

int cpu
the cpu to run on
long (*)(void *) fn
the function to run
void * arg
the function arg

Description

It is up to the caller to ensure that the cpu doesn’t go offline. The caller must not hold any locks which would prevent fn from completing.

Return

The value fn returns.
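
A sketch (the callback name is hypothetical) that runs a function on CPU 2 and collects its return value:

```c
#include <linux/workqueue.h>
#include <linux/smp.h>

static long run_on_target(void *arg)
{
	/* executes in a kthread bound to the requested CPU; may sleep */
	return raw_smp_processor_id();
}

/* returns 2 here, assuming CPU 2 stays online as required above */
long cpu = work_on_cpu(2, run_on_target, NULL);
```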

Internal Functions

int wait_task_stopped(struct wait_opts * wo, int ptrace, struct task_struct * p)

Wait for TASK_STOPPED or TASK_TRACED

Parameters

struct wait_opts * wo
wait options
int ptrace
is the wait for ptrace
struct task_struct * p
task to wait for

Description

Handle sys_wait4() work for p in state TASK_STOPPED or TASK_TRACED.

Context

read_lock(tasklist_lock), which is released if return value is non-zero. Also, grabs and releases p->sighand->siglock.

Return

0 if the wait condition didn’t exist and the search for other wait conditions should continue. A non-zero return (-errno on failure, p's PID on success) implies that tasklist_lock is released and the wait condition search should terminate.

bool task_set_jobctl_pending(struct task_struct * task, unsigned long mask)

set jobctl pending bits

Parameters

struct task_struct * task
target task
unsigned long mask
pending bits to set

Description

Set mask in task->jobctl. mask must be a subset of JOBCTL_PENDING_MASK | JOBCTL_STOP_CONSUME | JOBCTL_STOP_SIGMASK | JOBCTL_TRAPPING. If a stop signo is being set, the existing signo is cleared. If task is already being killed or exiting, this function becomes a no-op.

Context

Must be called with task->sighand->siglock held.

Return

true if mask was set, false if it became a no-op because task was dying.

void task_clear_jobctl_trapping(struct task_struct * task)

clear jobctl trapping bit

Parameters

struct task_struct * task
target task

Description

If JOBCTL_TRAPPING is set, a ptracer is waiting for us to enter TRACED. Clear it and wake up the ptracer. Note that we don’t need any further locking. task->siglock guarantees that task->parent points to the ptracer.

Context

Must be called with task->sighand->siglock held.

void task_clear_jobctl_pending(struct task_struct * task, unsigned long mask)

clear jobctl pending bits

Parameters

struct task_struct * task
target task
unsigned long mask
pending bits to clear

Description

Clear mask from task->jobctl. mask must be subset of JOBCTL_PENDING_MASK. If JOBCTL_STOP_PENDING is being cleared, other STOP bits are cleared together.

If clearing of mask leaves no stop or trap pending, this function calls task_clear_jobctl_trapping().

Context

Must be called with task->sighand->siglock held.

bool task_participate_group_stop(struct task_struct * task)

participate in a group stop

Parameters

struct task_struct * task
task participating in a group stop

Description

task has JOBCTL_STOP_PENDING set and is participating in a group stop. Group stop states are cleared and the group stop count is consumed if JOBCTL_STOP_CONSUME was set. If the consumption completes the group stop, the appropriate SIGNAL_* flags are set.

Context

Must be called with task->sighand->siglock held.

Return

true if group stop completion should be notified to the parent, false otherwise.

void ptrace_trap_notify(struct task_struct * t)

schedule trap to notify ptracer

Parameters

struct task_struct * t
tracee wanting to notify tracer

Description

This function schedules a sticky ptrace trap, which is cleared on the next TRAP_STOP, to notify the ptracer of an event. t must have been seized by the ptracer.

If t is running, STOP trap will be taken. If trapped for STOP and ptracer is listening for events, tracee is woken up so that it can re-trap for the new event. If trapped otherwise, STOP trap will be eventually taken without returning to userland after the existing traps are finished by PTRACE_CONT.

Context

Must be called with task->sighand->siglock held.

void do_notify_parent_cldstop(struct task_struct * tsk, bool for_ptracer, int why)

notify parent of stopped/continued state change

Parameters

struct task_struct * tsk
task reporting the state change
bool for_ptracer
the notification is for ptracer
int why
CLD_{CONTINUED|STOPPED|TRAPPED} to report

Description

Notify tsk's parent that the stopped/continued state has changed. If for_ptracer is false, tsk's group leader notifies its real parent. If true, tsk reports to tsk->parent, which should be the ptracer.

Context

Must be called with tasklist_lock at least read locked.

bool do_signal_stop(int signr)

handle group stop for SIGSTOP and other stop signals

Parameters

int signr
signr causing group stop if initiating

Description

If JOBCTL_STOP_PENDING is not set yet, initiate group stop with signr and participate in it. If already set, participate in the existing group stop. If participated in a group stop (and thus slept), true is returned with siglock released.

If ptraced, this function doesn’t handle stop itself. Instead, JOBCTL_TRAP_STOP is scheduled and false is returned with siglock untouched. The caller must ensure that INTERRUPT trap handling takes places afterwards.

Context

Must be called with current->sighand->siglock held, which is released on true return.

Return

false if group stop is already cancelled or ptrace trap is scheduled. true if participated in group stop.

void do_jobctl_trap(void)

take care of ptrace jobctl traps

Parameters

void
no arguments

Description

When PT_SEIZED, it’s used for both group stop and explicit SEIZE/INTERRUPT traps. Both generate PTRACE_EVENT_STOP trap with accompanying siginfo. If stopped, lower eight bits of exit_code contain the stop signal; otherwise, SIGTRAP.

When !PT_SEIZED, it’s used only for group stop trap with stop signal number as exit_code and no siginfo.

Context

Must be called with current->sighand->siglock held, which may be released and re-acquired before returning with intervening sleep.

void signal_delivered(struct ksignal * ksig, int stepping)

Parameters

struct ksignal * ksig
kernel signal struct
int stepping
nonzero if debugger single-step or block-step in use

Description

This function should be called when a signal has successfully been delivered. It updates the blocked signals accordingly (ksig->ka.sa.sa_mask is always blocked, and the signal itself is blocked unless SA_NODEFER is set in ksig->ka.sa.sa_flags). Tracing is notified.

long sys_restart_syscall(void)

restart a system call

Parameters

void
no arguments
void set_current_blocked(sigset_t * newset)

change current->blocked mask

Parameters

sigset_t * newset
new mask

Description

It is wrong to change ->blocked directly, this helper should be used to ensure the process can’t miss a shared signal we are going to block.

long sys_rt_sigprocmask(int how, sigset_t __user * nset, sigset_t __user * oset, size_t sigsetsize)

change the list of currently blocked signals

Parameters

int how
whether to add, remove, or set signals
sigset_t __user * nset
the new signal mask, if non-null
sigset_t __user * oset
previous value of signal mask if non-null
size_t sigsetsize
size of sigset_t type
long sys_rt_sigpending(sigset_t __user * uset, size_t sigsetsize)

examine a pending signal that has been raised while blocked

Parameters

sigset_t __user * uset
stores pending signals
size_t sigsetsize
size of sigset_t type or larger
int do_sigtimedwait(const sigset_t * which, siginfo_t * info, const struct timespec * ts)

wait for queued signals specified in which

Parameters

const sigset_t * which
queued signals to wait for
siginfo_t * info
if non-null, the signal’s siginfo is returned here
const struct timespec * ts
upper bound on process time suspension
long sys_rt_sigtimedwait(const sigset_t __user * uthese, siginfo_t __user * uinfo, const struct timespec __user * uts, size_t sigsetsize)

synchronously wait for queued signals specified in uthese

Parameters

const sigset_t __user * uthese
queued signals to wait for
siginfo_t __user * uinfo
if non-null, the signal’s siginfo is returned here
const struct timespec __user * uts
upper bound on process time suspension
size_t sigsetsize
size of sigset_t type
long sys_kill(pid_t pid, int sig)

send a signal to a process

Parameters

pid_t pid
the PID of the process
int sig
signal to be sent
long sys_tgkill(pid_t tgid, pid_t pid, int sig)

send signal to one specific thread

Parameters

pid_t tgid
the thread group ID of the thread
pid_t pid
the PID of the thread
int sig
signal to be sent

Description

This syscall also checks the tgid and returns -ESRCH even if the PID exists but no longer belongs to the target thread group. This solves the problem of threads exiting and PIDs getting reused.

long sys_tkill(pid_t pid, int sig)

send signal to one specific task

Parameters

pid_t pid
the PID of the task
int sig
signal to be sent

Description

Send a signal to only one task, even if it’s a CLONE_THREAD task.
long sys_rt_sigqueueinfo(pid_t pid, int sig, siginfo_t __user * uinfo)

send signal information to a process

Parameters

pid_t pid
the PID of the thread
int sig
signal to be sent
siginfo_t __user * uinfo
signal info to be sent
long sys_sigpending(old_sigset_t __user * set)

examine pending signals

Parameters

old_sigset_t __user * set
where mask of pending signal is returned
long sys_sigprocmask(int how, old_sigset_t __user * nset, old_sigset_t __user * oset)

examine and change blocked signals

Parameters

int how
whether to add, remove, or set signals
old_sigset_t __user * nset
signals to add or remove (if non-null)
old_sigset_t __user * oset
previous value of signal mask if non-null

Description

Some platforms have their own version with special arguments; others support only sys_rt_sigprocmask.

long sys_rt_sigaction(int sig, const struct sigaction __user * act, struct sigaction __user * oact, size_t sigsetsize)

alter an action taken by a process

Parameters

int sig
signal to be sent
const struct sigaction __user * act
new sigaction
struct sigaction __user * oact
used to save the previous sigaction
size_t sigsetsize
size of sigset_t type
long sys_rt_sigsuspend(sigset_t __user * unewset, size_t sigsetsize)

replace the signal mask with the unewset value and wait until a signal is received

Parameters

sigset_t __user * unewset
new signal mask value
size_t sigsetsize
size of sigset_t type
kthread_run(threadfn, data, namefmt, ...)

create and wake a thread.

Parameters

threadfn
the function to run until signal_pending(current).
data
data ptr for threadfn.
namefmt
printf-style name for the thread.
...
variable arguments

Description

Convenient wrapper for kthread_create() followed by wake_up_process(). Returns the kthread or ERR_PTR(-ENOMEM).

bool kthread_should_stop(void)

should this kthread return now?

Parameters

void
no arguments

Description

When someone calls kthread_stop() on your kthread, it will be woken and this will return true. You should then return, and your return value will be passed through to kthread_stop().
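
The canonical loop built from kthread_run(), kthread_should_stop() and kthread_stop() looks roughly like this (thread name and body are hypothetical):

```c
#include <linux/kthread.h>
#include <linux/sched.h>

static int mydrv_thread(void *data)
{
	while (!kthread_should_stop()) {
		/* do one unit of work, then sleep or block */
		schedule_timeout_interruptible(HZ);
	}
	return 0;	/* passed back through kthread_stop() */
}

/* creation and teardown: */
struct task_struct *task = kthread_run(mydrv_thread, NULL, "mydrv/%d", 0);
if (IS_ERR(task))
	return PTR_ERR(task);
/* ... */
int ret = kthread_stop(task);	/* wakes the thread, waits for it to exit */
```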

bool kthread_should_park(void)

should this kthread park now?

Parameters

void
no arguments

Description

When someone calls kthread_park() on your kthread, it will be woken and this will return true. You should then do the necessary cleanup and call kthread_parkme().

Similar to kthread_should_stop(), but this keeps the thread alive and in a park position. kthread_unpark() “restarts” the thread and calls the thread function again.

bool kthread_freezable_should_stop(bool * was_frozen)

should this freezable kthread return now?

Parameters

bool * was_frozen
optional out parameter, indicates whether current was frozen

Description

kthread_should_stop() for freezable kthreads, which will enter refrigerator if necessary. This function is safe from kthread_stop() / freezer deadlock and freezable kthreads should use this function instead of calling try_to_freeze() directly.

struct task_struct * kthread_create_on_node(int (*threadfn) (void *data), void * data, int node, const char namefmt[], ...)

create a kthread.

Parameters

int (*)(void *data) threadfn
the function to run until signal_pending(current).
void * data
data ptr for threadfn.
int node
task and thread structures for the thread are allocated on this node
const char namefmt[]
printf-style name for the thread
...
variable arguments

Description

This helper function creates and names a kernel thread. The thread will be stopped: use wake_up_process() to start it. See also kthread_run(). The new thread has SCHED_NORMAL policy and is affine to all CPUs.

If the thread is going to be bound to a particular cpu, give its node in node to get NUMA affinity for the kthread stack, or else give NUMA_NO_NODE. When woken, the thread will run threadfn() with data as its argument. threadfn() can either call do_exit() directly if it is a standalone thread for which no one will call kthread_stop(), or return when kthread_should_stop() is true (which means kthread_stop() has been called). The return value should be zero or a negative error number; it will be passed to kthread_stop().

Returns a task_struct or ERR_PTR(-ENOMEM) or ERR_PTR(-EINTR).

void kthread_bind(struct task_struct * p, unsigned int cpu)

bind a just-created kthread to a cpu.

Parameters

struct task_struct * p
thread created by kthread_create().
unsigned int cpu
cpu (might not be online, must be possible) for k to run on.

Description

This function is equivalent to set_cpus_allowed(), except that cpu doesn’t need to be online, and the thread must be stopped (i.e., just returned from kthread_create()).

void kthread_unpark(struct task_struct * k)

unpark a thread created by kthread_create().

Parameters

struct task_struct * k
thread created by kthread_create().

Description

Sets kthread_should_park() for k to return false, wakes it, and waits for it to return. If the thread is marked percpu then it's bound to the cpu again.

int kthread_park(struct task_struct * k)

park a thread created by kthread_create().

Parameters

struct task_struct * k
thread created by kthread_create().

Description

Sets kthread_should_park() for k to return true, wakes it, and waits for it to return. This can also be called after kthread_create() instead of calling wake_up_process(): the thread will park without calling threadfn().

Returns 0 if the thread is parked, -ENOSYS if the thread exited. If called by the kthread itself just the park bit is set.

int kthread_stop(struct task_struct * k)

stop a thread created by kthread_create().

Parameters

struct task_struct * k
thread created by kthread_create().

Description

Sets kthread_should_stop() for k to return true, wakes it, and waits for it to exit. This can also be called after kthread_create() instead of calling wake_up_process(): the thread will exit without calling threadfn().

If threadfn() may call do_exit() itself, the caller must ensure task_struct can’t go away.

Returns the result of threadfn(), or -EINTR if wake_up_process() was never called.

int kthread_worker_fn(void * worker_ptr)

kthread function to process kthread_worker

Parameters

void * worker_ptr
pointer to initialized kthread_worker

Description

This function can be used as threadfn to kthread_create() or kthread_run() with worker_ptr argument pointing to an initialized kthread_worker. The started kthread will process work_list until it is stopped with kthread_stop(). A kthread can also call this function directly after extra initialization.

Different kthreads can be used for the same kthread_worker as long as there’s only one kthread attached to it at any given time. A kthread_worker without an attached kthread simply collects queued kthread_works.
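
A sketch of the pattern with hypothetical identifiers, using the worker/work initialization helpers from <linux/kthread.h>:

```c
#include <linux/kthread.h>

static struct kthread_worker worker;
static struct kthread_work work;
static struct task_struct *worker_task;

static void mydrv_kwork_fn(struct kthread_work *w)
{
	/* runs in worker_task's context, in queueing order */
}

init_kthread_worker(&worker);
worker_task = kthread_run(kthread_worker_fn, &worker, "mydrv_worker");
if (IS_ERR(worker_task))
	return PTR_ERR(worker_task);

init_kthread_work(&work, mydrv_kwork_fn);
queue_kthread_work(&worker, &work);
flush_kthread_worker(&worker);	/* wait for everything queued so far */
kthread_stop(worker_task);
```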

bool queue_kthread_work(struct kthread_worker * worker, struct kthread_work * work)

queue a kthread_work

Parameters

struct kthread_worker * worker
target kthread_worker
struct kthread_work * work
kthread_work to queue

Description

Queue work to work processor task for async execution. task must have been created with kthread_worker_create(). Returns true if work was successfully queued, false if it was already pending.

void flush_kthread_work(struct kthread_work * work)

flush a kthread_work

Parameters

struct kthread_work * work
work to flush

Description

If work is queued or executing, wait for it to finish execution.

void flush_kthread_worker(struct kthread_worker * worker)

flush all current works on a kthread_worker

Parameters

struct kthread_worker * worker
worker to flush

Description

Wait until all currently executing or pending works on worker are finished.

Kernel objects manipulation

char * kobject_get_path(struct kobject * kobj, gfp_t gfp_mask)

generate and return the path associated with a given kobj and kset pair.

Parameters

struct kobject * kobj
kobject in question, with which to build the path
gfp_t gfp_mask
the allocation type used to allocate the path

Description

The result must be freed by the caller with kfree().

int kobject_set_name(struct kobject * kobj, const char * fmt, ...)

Set the name of a kobject

Parameters

struct kobject * kobj
struct kobject to set the name of
const char * fmt
format string used to build the name
...
variable arguments

Description

This sets the name of the kobject. If you have already added the kobject to the system, you must call kobject_rename() in order to change the name of the kobject.

void kobject_init(struct kobject * kobj, struct kobj_type * ktype)

initialize a kobject structure

Parameters

struct kobject * kobj
pointer to the kobject to initialize
struct kobj_type * ktype
pointer to the ktype for this kobject.

Description

This function will properly initialize a kobject such that it can then be passed to the kobject_add() call.

After this function is called, the kobject MUST be cleaned up by a call to kobject_put(), not by a call to kfree() directly, to ensure that all of the memory is cleaned up properly.

int kobject_add(struct kobject * kobj, struct kobject * parent, const char * fmt, ...)

the main kobject add function

Parameters

struct kobject * kobj
the kobject to add
struct kobject * parent
pointer to the parent of the kobject.
const char * fmt
format to name the kobject with.
...
variable arguments

Description

The kobject name is set and added to the kobject hierarchy in this function.

If parent is set, then the parent of the kobj will be set to it. If parent is NULL, then the parent of the kobj will be set to the kobject associated with the kset assigned to this kobject. If no kset is assigned to the kobject, then the kobject will be located in the root of the sysfs tree.

If this function returns an error, kobject_put() must be called to properly clean up the memory associated with the object. Under no instance should the kobject that is passed to this function be directly freed with a call to kfree(), as that can leak memory.

Note, no “add” uevent will be created with this call, the caller should set up all of the necessary sysfs files for the object and then call kobject_uevent() with the UEVENT_ADD parameter to ensure that userspace is properly notified of this kobject’s creation.

int kobject_init_and_add(struct kobject * kobj, struct kobj_type * ktype, struct kobject * parent, const char * fmt, ...)

initialize a kobject structure and add it to the kobject hierarchy

Parameters

struct kobject * kobj
pointer to the kobject to initialize
struct kobj_type * ktype
pointer to the ktype for this kobject.
struct kobject * parent
pointer to the parent of this kobject.
const char * fmt
the name of the kobject.
...
variable arguments

Description

This function combines the calls to kobject_init() and kobject_add(). The error handling required after a call to kobject_add(), and the kobject lifetime rules, apply here in the same way.

int kobject_rename(struct kobject * kobj, const char * new_name)

change the name of an object

Parameters

struct kobject * kobj
object in question.
const char * new_name
object’s new name

Description

It is the responsibility of the caller to provide mutual exclusion between two different calls of kobject_rename on the same kobject and to ensure that new_name is valid and won’t conflict with other kobjects.

int kobject_move(struct kobject * kobj, struct kobject * new_parent)

move object to another parent

Parameters

struct kobject * kobj
object in question.
struct kobject * new_parent
object’s new parent (can be NULL)
void kobject_del(struct kobject * kobj)

unlink kobject from hierarchy.

Parameters

struct kobject * kobj
object.
struct kobject * kobject_get(struct kobject * kobj)

increment refcount for object.

Parameters

struct kobject * kobj
object.
void kobject_put(struct kobject * kobj)

decrement refcount for object.

Parameters

struct kobject * kobj
object.

Description

Decrement the refcount, and if 0, call kobject_cleanup().

struct kobject * kobject_create_and_add(const char * name, struct kobject * parent)

create a struct kobject dynamically and register it with sysfs

Parameters

const char * name
the name for the kobject
struct kobject * parent
the parent kobject of this kobject, if any.

Description

This function creates a kobject structure dynamically and registers it with sysfs. When you are finished with this structure, call kobject_put() and the structure will be dynamically freed when it is no longer being used.

If the kobject was not able to be created, NULL will be returned.

int kset_register(struct kset * k)

initialize and add a kset.

Parameters

struct kset * k
kset.
void kset_unregister(struct kset * k)

remove a kset.

Parameters

struct kset * k
kset.
struct kobject * kset_find_obj(struct kset * kset, const char * name)

search for object in kset.

Parameters

struct kset * kset
kset we’re looking in.
const char * name
object’s name.

Description

Lock kset via kset->subsys, and iterate over kset->list, looking for a matching kobject. If a matching object is found, take a reference on it and return the object.

struct kset * kset_create_and_add(const char * name, const struct kset_uevent_ops * uevent_ops, struct kobject * parent_kobj)

create a struct kset dynamically and add it to sysfs

Parameters

const char * name
the name for the kset
const struct kset_uevent_ops * uevent_ops
a struct kset_uevent_ops for the kset
struct kobject * parent_kobj
the parent kobject of this kset, if any.

Description

This function creates a kset structure dynamically and registers it with sysfs. When you are finished with this structure, call kset_unregister() and the structure will be dynamically freed when it is no longer being used.

If the kset was not able to be created, NULL will be returned.
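A minimal sketch of creating a kset with a dynamically created kobject beneath it. The names my_subsys and settings are made up for illustration:

```c
#include <linux/kobject.h>

static struct kset *my_kset;
static struct kobject *my_dir;

static int __init my_init(void)
{
	/* Creates /sys/my_subsys (no uevent_ops, no parent). */
	my_kset = kset_create_and_add("my_subsys", NULL, NULL);
	if (!my_kset)
		return -ENOMEM;

	/* Creates /sys/my_subsys/settings. */
	my_dir = kobject_create_and_add("settings", &my_kset->kobj);
	if (!my_dir) {
		kset_unregister(my_kset);
		return -ENOMEM;
	}
	return 0;
}

static void __exit my_exit(void)
{
	kobject_put(my_dir);	/* frees the dynamically created kobject */
	kset_unregister(my_kset);
}
```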

Kernel utility functions

upper_32_bits(n)

return bits 32-63 of a number

Parameters

n
the number we’re accessing

Description

A basic shift-right of a 64- or 32-bit quantity. Use this to suppress the “right shift count >= width of type” warning when that quantity is 32-bits.

lower_32_bits(n)

return bits 0-31 of a number

Parameters

n
the number we’re accessing
might_sleep()

annotation for functions that can sleep

Parameters

Description

This macro will print a stack trace if it is executed in an atomic context (spinlock, irq-handler, ...).

This is a useful debugging help to be able to catch problems early and not be bitten later when the calling function happens to sleep when it is not supposed to.

abs(x)

return absolute value of an argument

Parameters

x
the value. If it is of an unsigned type, it is converted to a signed type first. char is treated as if it were signed (regardless of whether it really is), but the macro’s return type is preserved as char.

Return

an absolute value of x.

u32 reciprocal_scale(u32 val, u32 ep_ro)

“scale” a value into range [0, ep_ro)

Parameters

u32 val
value
u32 ep_ro
right open interval endpoint

Description

Perform a “reciprocal multiplication” in order to “scale” a value into the right-open interval [0, ep_ro). This is useful, e.g., for computing an index into an array containing ep_ro elements. Think of it as a sort of modulus, only that the result isn’t that of modulo. ;) Note that if the initial input is a small value, the result will be 0.

Return

a result based on val in interval [0, ep_ro).

int kstrtoul(const char * s, unsigned int base, unsigned long * res)

convert a string to an unsigned long

Parameters

const char * s
The start of the string. The string must be null-terminated, and may also include a single newline before its terminating null. The first character may also be a plus sign, but not a minus sign.
unsigned int base
The number base to use. The maximum supported base is 16. If base is given as 0, then the base of the string is automatically detected with the conventional semantics - If it begins with 0x the number will be parsed as a hexadecimal (case insensitive), if it otherwise begins with 0, it will be parsed as an octal number. Otherwise it will be parsed as a decimal.
unsigned long * res
Where to write the result of the conversion on success.

Description

Returns 0 on success, -ERANGE on overflow and -EINVAL on parsing error. Used as a replacement for the obsolete simple_strtoul(). The return code must be checked.
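A typical caller is a sysfs store method. This sketch assumes a hypothetical threshold attribute; only kstrtoul() and its return convention come from the text above:

```c
static unsigned long threshold;	/* hypothetical tunable */

static ssize_t threshold_store(struct device *dev,
			       struct device_attribute *attr,
			       const char *buf, size_t count)
{
	unsigned long val;
	int ret;

	ret = kstrtoul(buf, 10, &val);	/* base 10; pass 0 to auto-detect */
	if (ret)
		return ret;		/* -EINVAL or -ERANGE */

	threshold = val;
	return count;
}
```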

int kstrtol(const char * s, unsigned int base, long * res)

convert a string to a long

Parameters

const char * s
The start of the string. The string must be null-terminated, and may also include a single newline before its terminating null. The first character may also be a plus sign or a minus sign.
unsigned int base
The number base to use. The maximum supported base is 16. If base is given as 0, then the base of the string is automatically detected with the conventional semantics - If it begins with 0x the number will be parsed as a hexadecimal (case insensitive), if it otherwise begins with 0, it will be parsed as an octal number. Otherwise it will be parsed as a decimal.
long * res
Where to write the result of the conversion on success.

Description

Returns 0 on success, -ERANGE on overflow and -EINVAL on parsing error. Used as a replacement for the obsolete simple_strtol(). The return code must be checked.

trace_printk(fmt, ...)

printf formatting in the ftrace buffer

Parameters

fmt
the printf format for printing
...
variable arguments

Note

__trace_printk is an internal function for trace_printk and
the ip is passed in via the trace_printk macro.

This function allows a kernel developer to debug fast path sections that printk is not appropriate for. By scattering in various printk like tracing in the code, a developer can quickly see where problems are occurring.

This is intended as a debugging tool for the developer only. Please refrain from leaving trace_printks scattered around in your code. (Extra memory is used for special buffers that are allocated when trace_printk() is used)

A little optimization trick is done here. If there’s only one argument, there’s no need to scan the string for printf formats; trace_puts() will suffice. But how can we take advantage of trace_puts() when trace_printk() has only one argument? By stringifying the args and checking the size we can tell whether or not there are args. __stringify((__VA_ARGS__)) will turn into “()0” with a size of 3 when there are no args; anything else will be bigger. All we need to do is define a string to this, take its size, and compare it to 3. If it’s bigger, use do_trace_printk(); otherwise, optimize it to trace_puts(). Then just let gcc optimize the rest.

trace_puts(str)

write a string into the ftrace buffer

Parameters

str
the string to record

Note

__trace_bputs is an internal function for trace_puts and
the ip is passed in via the trace_puts macro.

This is similar to trace_printk() but is made for those really fast paths where a developer wants the least amount of “Heisenbug” effects, and where the processing of the print format is still too much.

This function allows a kernel developer to debug fast path sections that printk is not appropriate for. By scattering in various printk like tracing in the code, a developer can quickly see where problems are occurring.

This is intended as a debugging tool for the developer only. Please refrain from leaving trace_puts scattered around in your code. (Extra memory is used for special buffers that are allocated when trace_puts() is used)

Return

0 if nothing was written, positive # if string was.
(1 when __trace_bputs is used, strlen(str) when __trace_puts is used)
min_not_zero(x, y)

return the minimum that is _not_ zero, unless both are zero

Parameters

x
value1
y
value2
clamp(val, lo, hi)

return a value clamped to a given range with strict typechecking

Parameters

val
current value
lo
lowest allowable value
hi
highest allowable value

Description

This macro does strict typechecking of lo/hi to make sure they are of the same type as val; the check is implemented as otherwise-unnecessary pointer comparisons in the macro body.

clamp_t(type, val, lo, hi)

return a value clamped to a given range using a given type

Parameters

type
the type of variable to use
val
current value
lo
minimum allowable value
hi
maximum allowable value

Description

This macro does no typechecking and uses temporary variables of type ‘type’ to make all the comparisons.

clamp_val(val, lo, hi)

return a value clamped to a given range using val’s type

Parameters

val
current value
lo
minimum allowable value
hi
maximum allowable value

Description

This macro does no typechecking and uses temporary variables of whatever type the input argument ‘val’ is. This is useful when val is an unsigned type and lo and hi are literals that would otherwise be assigned a signed integer type.

container_of(ptr, type, member)

cast a member of a structure out to the containing structure

Parameters

ptr
the pointer to the member.
type
the type of the container struct this is embedded in.
member
the name of the member within the struct.
__visible int printk(const char * fmt, ...)

print a kernel message

Parameters

const char * fmt
format string
...
variable arguments

Description

This is printk(). It can be called from any context. We want it to work.

We try to grab the console_lock. If we succeed, it’s easy - we log the output and call the console drivers. If we fail to get the semaphore, we place the output into the log buffer and return. The current holder of the console_sem will notice the new output in console_unlock(); and will send it to the consoles before releasing the lock.

One effect of this deferred printing is that code which calls printk() and then changes console_loglevel may break. This is because console_loglevel is inspected when the actual printing occurs.

See also: printf(3)

See the vsnprintf() documentation for format string extensions over C99.

void console_lock(void)

lock the console system for exclusive use.

Parameters

void
no arguments

Description

Acquires a lock which guarantees that the caller has exclusive access to the console system and the console_drivers list.

Can sleep, returns nothing.

int console_trylock(void)

try to lock the console system for exclusive use.

Parameters

void
no arguments

Description

Try to acquire a lock which guarantees that the caller has exclusive access to the console system and the console_drivers list.

Returns 1 on success and 0 on failure to acquire the lock.

void console_unlock(void)

unlock the console system

Parameters

void
no arguments

Description

Releases the console_lock which the caller holds on the console system and the console driver list.

While the console_lock was held, console output may have been buffered by printk(). If this is the case, console_unlock(); emits the output prior to releasing the lock.

If there is output waiting, we wake /dev/kmsg and syslog() users.

console_unlock(); may be called from any context.

void __sched console_conditional_schedule(void)

yield the CPU if required

Parameters

void
no arguments

Description

If the console code is currently allowed to sleep, and if this CPU should yield the CPU to another task, do so here.

Must be called within console_lock();.

bool printk_timed_ratelimit(unsigned long * caller_jiffies, unsigned int interval_msecs)

caller-controlled printk ratelimiting

Parameters

unsigned long * caller_jiffies
pointer to caller’s state
unsigned int interval_msecs
minimum interval between prints

Description

printk_timed_ratelimit() returns true if more than interval_msecs milliseconds have elapsed since the last time printk_timed_ratelimit() returned true.
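For example, to limit a hot-path warning to one message every five seconds (the handler and message are hypothetical; only printk_timed_ratelimit() comes from the text above):

```c
/* Caller state lives in a static jiffies variable;
 * zero-initialized, so the very first call prints. */
static unsigned long last_warn;

static void handle_overrun(void)
{
	if (printk_timed_ratelimit(&last_warn, 5000))
		printk(KERN_WARNING "device: buffer overrun\n");
}
```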

int kmsg_dump_register(struct kmsg_dumper * dumper)

register a kernel log dumper.

Parameters

struct kmsg_dumper * dumper
pointer to the kmsg_dumper structure

Description

Adds a kernel log dumper to the system. The dump callback in the structure will be called when the kernel oopses or panics and must be set. Returns zero on success and -EINVAL or -EBUSY otherwise.

int kmsg_dump_unregister(struct kmsg_dumper * dumper)

unregister a kmsg dumper.

Parameters

struct kmsg_dumper * dumper
pointer to the kmsg_dumper structure

Description

Removes a dump device from the system. Returns zero on success and -EINVAL otherwise.

bool kmsg_dump_get_line(struct kmsg_dumper * dumper, bool syslog, char * line, size_t size, size_t * len)

retrieve one kmsg log line

Parameters

struct kmsg_dumper * dumper
registered kmsg dumper
bool syslog
include the “<4>” prefixes
char * line
buffer to copy the line to
size_t size
maximum size of the buffer
size_t * len
length of line placed into buffer

Description

Start at the beginning of the kmsg buffer, with the oldest kmsg record, and copy one record into the provided buffer.

Consecutive calls will return the next available record moving towards the end of the buffer with the youngest messages.

A return value of FALSE indicates that there are no more records to read.

bool kmsg_dump_get_buffer(struct kmsg_dumper * dumper, bool syslog, char * buf, size_t size, size_t * len)

copy kmsg log lines

Parameters

struct kmsg_dumper * dumper
registered kmsg dumper
bool syslog
include the “<4>” prefixes
char * buf
buffer to copy the line to
size_t size
maximum size of the buffer
size_t * len
length of line placed into buffer

Description

Start at the end of the kmsg buffer and fill the provided buffer with as many of the youngest kmsg records as fit into it. If the buffer is large enough, all available kmsg records will be copied with a single call.

Consecutive calls will fill the buffer with the next block of available older records, not including the earlier retrieved ones.

A return value of FALSE indicates that there are no more records to read.

void kmsg_dump_rewind(struct kmsg_dumper * dumper)

reset the iterator

Parameters

struct kmsg_dumper * dumper
registered kmsg dumper

Description

Reset the dumper’s iterator so that kmsg_dump_get_line() and kmsg_dump_get_buffer() can be called again and used multiple times within the same dump() callback.
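The functions above combine into the usual dumper pattern. In this sketch, my_write_to_flash() is a hypothetical persistence hook; the structure and calls follow the API described in this section:

```c
#include <linux/kmsg_dump.h>

/* Walk the log line by line when the kernel oopses or panics. */
static void my_dump(struct kmsg_dumper *dumper,
		    enum kmsg_dump_reason reason)
{
	char line[256];
	size_t len;

	kmsg_dump_rewind(dumper);
	while (kmsg_dump_get_line(dumper, true, line, sizeof(line), &len))
		my_write_to_flash(line, len);	/* hypothetical */
}

static struct kmsg_dumper my_dumper = {
	.dump = my_dump,	/* the dump callback must be set */
};

/* kmsg_dump_register(&my_dumper) at module init,
 * kmsg_dump_unregister(&my_dumper) at module exit. */
```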

void panic(const char * fmt, ...)

halt the system

Parameters

const char * fmt
The text string to print
...
variable arguments

Description

Display a message, then perform cleanups.

This function never returns.

void add_taint(unsigned flag, enum lockdep_ok lockdep_ok)

add a taint flag

Parameters

unsigned flag
one of the TAINT_* constants.
enum lockdep_ok lockdep_ok
whether lock debugging is still OK.

Description

If something bad has gone wrong, you’ll want lockdep_ok = false, but for some noteworthy-but-not-corrupting cases, it can be set to true.

int init_srcu_struct(struct srcu_struct * sp)

initialize a sleep-RCU structure

Parameters

struct srcu_struct * sp
structure to initialize.

Description

Must invoke this on a given srcu_struct before passing that srcu_struct to any other function. Each srcu_struct represents a separate domain of SRCU protection.

void cleanup_srcu_struct(struct srcu_struct * sp)

deconstruct a sleep-RCU structure

Parameters

struct srcu_struct * sp
structure to clean up.

Description

Must invoke this after you are finished using a given srcu_struct that was initialized via init_srcu_struct(), else you leak memory.

void synchronize_srcu(struct srcu_struct * sp)

wait for prior SRCU read-side critical-section completion

Parameters

struct srcu_struct * sp
srcu_struct with which to synchronize.

Description

Wait for the count of both indexes to drain to zero. To avoid possible starvation of synchronize_srcu(), it first waits for the count of index ((->completed & 1) ^ 1) to drain to zero, then flips ->completed and waits for the count of the other index to drain.

Can block; must be called from process context.

Note that it is illegal to call synchronize_srcu() from the corresponding SRCU read-side critical section; doing so will result in deadlock. However, it is perfectly legal to call synchronize_srcu() on one srcu_struct from some other srcu_struct’s read-side critical section, as long as the resulting graph of srcu_structs is acyclic.

There are memory-ordering constraints implied by synchronize_srcu(). On systems with more than one CPU, when synchronize_srcu() returns, each CPU is guaranteed to have executed a full memory barrier since the end of its last corresponding SRCU-sched read-side critical section whose beginning preceded the call to synchronize_srcu(). In addition, each CPU having an SRCU read-side critical section that extends beyond the return from synchronize_srcu() is guaranteed to have executed a full memory barrier after the beginning of synchronize_srcu() and before the beginning of that SRCU read-side critical section. Note that these guarantees include CPUs that are offline, idle, or executing in user mode, as well as CPUs that are executing in the kernel.

Furthermore, if CPU A invoked synchronize_srcu(), which returned to its caller on CPU B, then both CPU A and CPU B are guaranteed to have executed a full memory barrier during the execution of synchronize_srcu(). This guarantee applies even if CPU A and CPU B are the same CPU, but again only if the system has more than one CPU.

Of course, these memory-ordering guarantees apply only when synchronize_srcu(), srcu_read_lock(), and srcu_read_unlock() are passed the same srcu_struct structure.
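The SRCU functions above are typically combined as in this sketch. The my_srcu domain, my_data pointer, and the reader/updater split are illustrative; note that readers may sleep, which is the point of SRCU:

```c
#include <linux/srcu.h>
#include <linux/slab.h>

/* init_srcu_struct(&my_srcu) must run before first use,
 * cleanup_srcu_struct(&my_srcu) at teardown. */
static struct srcu_struct my_srcu;
static void __rcu *my_data;

static void reader(void)
{
	int idx = srcu_read_lock(&my_srcu);
	void *p = srcu_dereference(my_data, &my_srcu);

	/* ... may sleep while using p ... */
	(void)p;
	srcu_read_unlock(&my_srcu, idx);
}

static void updater(void *new_data)
{
	void *old = rcu_dereference_protected(my_data, 1);

	rcu_assign_pointer(my_data, new_data);
	synchronize_srcu(&my_srcu);	/* wait out all current readers */
	kfree(old);			/* now nobody can still see old */
}
```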

void synchronize_srcu_expedited(struct srcu_struct * sp)

Brute-force SRCU grace period

Parameters

struct srcu_struct * sp
srcu_struct with which to synchronize.

Description

Wait for an SRCU grace period to elapse, but be more aggressive about spinning rather than blocking when waiting.

Note that synchronize_srcu_expedited() has the same deadlock and memory-ordering properties as does synchronize_srcu().

void srcu_barrier(struct srcu_struct * sp)

Wait until all in-flight call_srcu() callbacks complete.

Parameters

struct srcu_struct * sp
srcu_struct on which to wait for in-flight callbacks.
unsigned long srcu_batches_completed(struct srcu_struct * sp)

return batches completed.

Parameters

struct srcu_struct * sp
srcu_struct on which to report batch completion.

Description

Report the number of batches, correlated with, but not necessarily precisely the same as, the number of grace periods that have elapsed.

void rcu_idle_enter(void)

inform RCU that current CPU is entering idle

Parameters

void
no arguments

Description

Enter idle mode, in other words, -leave- the mode in which RCU read-side critical sections can occur. (Though RCU read-side critical sections can occur in irq handlers in idle, a possibility handled by irq_enter() and irq_exit().)

We crowbar the ->dynticks_nesting field to zero to allow for the possibility of usermode upcalls having messed up our count of interrupt nesting level during the prior busy period.

void rcu_idle_exit(void)

inform RCU that current CPU is leaving idle

Parameters

void
no arguments

Description

Exit idle mode, in other words, -enter- the mode in which RCU read-side critical sections can occur.

We crowbar the ->dynticks_nesting field to DYNTICK_TASK_NEST to allow for the possibility of usermode upcalls messing up our count of interrupt nesting level during the busy period that is just now starting.

bool notrace rcu_is_watching(void)

see if RCU thinks that the current CPU is idle

Parameters

void
no arguments

Description

If the current CPU is in its idle loop and is neither in an interrupt handler nor in an NMI handler, return true.

void synchronize_sched(void)

wait until an rcu-sched grace period has elapsed.

Parameters

void
no arguments

Description

Control will return to the caller some time after a full rcu-sched grace period has elapsed, in other words after all currently executing rcu-sched read-side critical sections have completed. These read-side critical sections are delimited by rcu_read_lock_sched() and rcu_read_unlock_sched(), and may be nested. Note that preempt_disable(), local_irq_disable(), and so on may be used in place of rcu_read_lock_sched().

This means that all preempt_disable code sequences, including NMI and non-threaded hardware-interrupt handlers, in progress on entry will have completed before this primitive returns. However, this does not guarantee that softirq handlers will have completed, since in some kernels, these handlers can run in process context, and can block.

Note that this guarantee implies further memory-ordering guarantees. On systems with more than one CPU, when synchronize_sched() returns, each CPU is guaranteed to have executed a full memory barrier since the end of its last RCU-sched read-side critical section whose beginning preceded the call to synchronize_sched(). In addition, each CPU having an RCU read-side critical section that extends beyond the return from synchronize_sched() is guaranteed to have executed a full memory barrier after the beginning of synchronize_sched() and before the beginning of that RCU read-side critical section. Note that these guarantees include CPUs that are offline, idle, or executing in user mode, as well as CPUs that are executing in the kernel.

Furthermore, if CPU A invoked synchronize_sched(), which returned to its caller on CPU B, then both CPU A and CPU B are guaranteed to have executed a full memory barrier during the execution of synchronize_sched() – even if CPU A and CPU B are the same CPU (but again only if the system has more than one CPU).

This primitive provides the guarantees made by the (now removed) synchronize_kernel() API. In contrast, synchronize_rcu() only guarantees that rcu_read_lock() sections will have completed. In “classic RCU”, these two guarantees happen to be one and the same, but can differ in realtime RCU implementations.

void synchronize_rcu_bh(void)

wait until an rcu_bh grace period has elapsed.

Parameters

void
no arguments

Description

Control will return to the caller some time after a full rcu_bh grace period has elapsed, in other words after all currently executing rcu_bh read-side critical sections have completed. RCU read-side critical sections are delimited by rcu_read_lock_bh() and rcu_read_unlock_bh(), and may be nested.

See the description of synchronize_sched() for more detailed information on memory ordering guarantees.

unsigned long get_state_synchronize_rcu(void)

Snapshot current RCU state

Parameters

void
no arguments

Description

Returns a cookie that is used by a later call to cond_synchronize_rcu() to determine whether or not a full grace period has elapsed in the meantime.

void cond_synchronize_rcu(unsigned long oldstate)

Conditionally wait for an RCU grace period

Parameters

unsigned long oldstate
return value from earlier call to get_state_synchronize_rcu()

Description

If a full RCU grace period has elapsed since the earlier call to get_state_synchronize_rcu(), just return. Otherwise, invoke synchronize_rcu() to wait for a full grace period.

Yes, this function does not take counter wrap into account. But counter wrap is harmless. If the counter wraps, we have waited for more than 2 billion grace periods (and way more on a 64-bit system!), so waiting for one additional grace period should be just fine.
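A common use is to snapshot the state before slow teardown work, so that by the time the wait is needed a grace period has often already elapsed. The my_entry type and do_slow_teardown_work() are hypothetical:

```c
static void remove_entry(struct my_entry *e)
{
	unsigned long snap;

	list_del_rcu(&e->list);		/* unpublish the entry */
	snap = get_state_synchronize_rcu();

	do_slow_teardown_work(e);	/* hypothetical; takes a while */

	/* If a full grace period elapsed during the teardown work,
	 * this returns immediately instead of blocking. */
	cond_synchronize_rcu(snap);
	kfree(e);
}
```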

unsigned long get_state_synchronize_sched(void)

Snapshot current RCU-sched state

Parameters

void
no arguments

Description

Returns a cookie that is used by a later call to cond_synchronize_sched() to determine whether or not a full grace period has elapsed in the meantime.

void cond_synchronize_sched(unsigned long oldstate)

Conditionally wait for an RCU-sched grace period

Parameters

unsigned long oldstate
return value from earlier call to get_state_synchronize_sched()

Description

If a full RCU-sched grace period has elapsed since the earlier call to get_state_synchronize_sched(), just return. Otherwise, invoke synchronize_sched() to wait for a full grace period.

Yes, this function does not take counter wrap into account. But counter wrap is harmless. If the counter wraps, we have waited for more than 2 billion grace periods (and way more on a 64-bit system!), so waiting for one additional grace period should be just fine.

void synchronize_sched_expedited(void)

Brute-force RCU-sched grace period

Parameters

void
no arguments

Description

Wait for an RCU-sched grace period to elapse, but use a “big hammer” approach to force the grace period to end quickly. This consumes significant time on all CPUs and is unfriendly to real-time workloads, so is thus not recommended for any sort of common-case code. In fact, if you are using synchronize_sched_expedited() in a loop, please restructure your code to batch your updates, and then use a single synchronize_sched() instead.

This implementation can be thought of as an application of sequence locking to expedited grace periods, but using the sequence counter to determine when someone else has already done the work instead of for retrying readers.

void rcu_barrier_bh(void)

Wait until all in-flight call_rcu_bh() callbacks complete.

Parameters

void
no arguments
void rcu_barrier_sched(void)

Wait for in-flight call_rcu_sched() callbacks.

Parameters

void
no arguments
void synchronize_rcu(void)

wait until a grace period has elapsed.

Parameters

void
no arguments

Description

Control will return to the caller some time after a full grace period has elapsed, in other words after all currently executing RCU read-side critical sections have completed. Note, however, that upon return from synchronize_rcu(), the caller might well be executing concurrently with new RCU read-side critical sections that began while synchronize_rcu() was waiting. RCU read-side critical sections are delimited by rcu_read_lock() and rcu_read_unlock(), and may be nested.

See the description of synchronize_sched() for more detailed information on memory ordering guarantees.

void synchronize_rcu_expedited(void)

Brute-force RCU grace period

Parameters

void
no arguments

Description

Wait for an RCU-preempt grace period, but expedite it. The basic idea is to IPI all non-idle non-nohz online CPUs. The IPI handler checks whether the CPU is in an RCU-preempt critical section, and if so, it sets a flag that causes the outermost rcu_read_unlock() to report the quiescent state. On the other hand, if the CPU is not in an RCU read-side critical section, the IPI handler reports the quiescent state immediately.

Although this is a great improvement over previous expedited implementations, it is still unfriendly to real-time workloads, so is thus not recommended for any sort of common-case code. In fact, if you are using synchronize_rcu_expedited() in a loop, please restructure your code to batch your updates, and then use a single synchronize_rcu() instead.

void rcu_barrier(void)

Wait until all in-flight call_rcu() callbacks complete.

Parameters

void
no arguments

Description

Note that this primitive does not necessarily wait for an RCU grace period to complete. For example, if there are no RCU callbacks queued anywhere in the system, then rcu_barrier() is within its rights to return immediately, without waiting for anything, much less an RCU grace period.

int rcu_read_lock_sched_held(void)

might we be in RCU-sched read-side critical section?

Parameters

void
no arguments

Description

If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an RCU-sched read-side critical section. In absence of CONFIG_DEBUG_LOCK_ALLOC, this assumes we are in an RCU-sched read-side critical section unless it can prove otherwise. Note that disabling of preemption (including disabling irqs) counts as an RCU-sched read-side critical section. This is useful for debug checks in functions that require that they be called within an RCU-sched read-side critical section.

Check debug_lockdep_rcu_enabled() to prevent false positives during boot and while lockdep is disabled.

Note that if the CPU is in the idle loop from an RCU point of view (ie: that we are in the section between rcu_idle_enter() and rcu_idle_exit()) then rcu_read_lock_held() returns false even if the CPU did an rcu_read_lock(). The reason for this is that RCU ignores CPUs that are in such a section, considering these as being in an extended quiescent state, so such a CPU is effectively never in an RCU read-side critical section regardless of what RCU primitives it invokes. This state of affairs is required: we need to keep an RCU-free window in idle where the CPU may possibly enter into low power mode. This way, CPUs that have started a grace period can notice our extended quiescent state. Otherwise we would delay any grace period as long as we run in the idle task.

Similarly, we avoid claiming an SRCU read lock held if the current CPU is offline.

void rcu_expedite_gp(void)

Expedite future RCU grace periods

Parameters

void
no arguments

Description

After a call to this function, future calls to synchronize_rcu() and friends act as if the corresponding synchronize_rcu_expedited() function had instead been called.

void rcu_unexpedite_gp(void)

Cancel prior rcu_expedite_gp() invocation

Parameters

void
no arguments

Description

Undo a prior call to rcu_expedite_gp(). If all prior calls to rcu_expedite_gp() are undone by a subsequent call to rcu_unexpedite_gp(), and if the rcu_expedited sysfs/boot parameter is not set, then all subsequent calls to synchronize_rcu() and friends will return to their normal non-expedited behavior.

int rcu_read_lock_held(void)

might we be in RCU read-side critical section?

Parameters

void
no arguments

Description

If CONFIG_DEBUG_LOCK_ALLOC is selected, returns nonzero iff in an RCU read-side critical section. In absence of CONFIG_DEBUG_LOCK_ALLOC, this assumes we are in an RCU read-side critical section unless it can prove otherwise. This is useful for debug checks in functions that require that they be called within an RCU read-side critical section.

Checks debug_lockdep_rcu_enabled() to prevent false positives during boot and while lockdep is disabled.

Note that rcu_read_lock() and the matching rcu_read_unlock() must occur in the same context, for example, it is illegal to invoke rcu_read_unlock() in process context if the matching rcu_read_lock() was invoked from within an irq handler.

Note that rcu_read_lock() is disallowed if the CPU is either idle or offline from an RCU perspective, so check for those as well.

int rcu_read_lock_bh_held(void)

might we be in RCU-bh read-side critical section?

Parameters

void
no arguments

Description

Check for bottom half being disabled, which covers both the CONFIG_PROVE_RCU and not cases. Note that if someone uses rcu_read_lock_bh(), but then later enables BH, lockdep (if enabled) will show the situation. This is useful for debug checks in functions that require that they be called within an RCU read-side critical section.

Check debug_lockdep_rcu_enabled() to prevent false positives during boot.

Note that rcu_read_lock() is disallowed if the CPU is either idle or offline from an RCU perspective, so check for those as well.

void wakeme_after_rcu(struct rcu_head * head)

Callback function to awaken a task after grace period

Parameters

struct rcu_head * head
Pointer to rcu_head member within rcu_synchronize structure

Description

Awaken the corresponding task now that a grace period has elapsed.

void init_rcu_head_on_stack(struct rcu_head * head)

initialize on-stack rcu_head for debugobjects

Parameters

struct rcu_head * head
pointer to rcu_head structure to be initialized

Description

This function informs debugobjects of a new rcu_head structure that has been allocated as an auto variable on the stack. This function is not required for rcu_head structures that are statically defined or that are dynamically allocated on the heap. This function has no effect for !CONFIG_DEBUG_OBJECTS_RCU_HEAD kernel builds.

void destroy_rcu_head_on_stack(struct rcu_head * head)

destroy on-stack rcu_head for debugobjects

Parameters

struct rcu_head * head
pointer to rcu_head structure to be initialized

Description

This function informs debugobjects that an on-stack rcu_head structure is about to go out of scope. As with init_rcu_head_on_stack(), this function is not required for rcu_head structures that are statically defined or that are dynamically allocated on the heap. Also as with init_rcu_head_on_stack(), this function has no effect for !CONFIG_DEBUG_OBJECTS_RCU_HEAD kernel builds.
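
The init/destroy pair brackets the lifetime of the on-stack structure. The sketch below mirrors the pattern synchronize_rcu() itself uses internally, where struct rcu_synchronize pairs an rcu_head with a completion and wakeme_after_rcu() (documented below) is the callback:

```c
static void wait_for_grace_period(void)
{
	struct rcu_synchronize rcu;

	init_rcu_head_on_stack(&rcu.head);	/* tell debugobjects about it */
	init_completion(&rcu.completion);
	call_rcu(&rcu.head, wakeme_after_rcu);	/* awaken us after a grace period */
	wait_for_completion(&rcu.completion);
	destroy_rcu_head_on_stack(&rcu.head);	/* head is about to go out of scope */
}
```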

void synchronize_rcu_tasks(void)

wait until an rcu-tasks grace period has elapsed.

Parameters

void
no arguments

Description

Control will return to the caller some time after a full rcu-tasks grace period has elapsed, in other words after all currently executing rcu-tasks read-side critical sections have elapsed. These read-side critical sections are delimited by calls to schedule(), cond_resched_rcu_qs(), idle execution, userspace execution, calls to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().

This is a very specialized primitive, intended only for a few uses in tracing and other situations requiring manipulation of function preambles and profiling hooks. The synchronize_rcu_tasks() function is not (yet) intended for heavy use from multiple CPUs.

Note that this guarantee implies further memory-ordering guarantees. On systems with more than one CPU, when synchronize_rcu_tasks() returns, each CPU is guaranteed to have executed a full memory barrier since the end of its last RCU-tasks read-side critical section whose beginning preceded the call to synchronize_rcu_tasks(). In addition, each CPU having an RCU-tasks read-side critical section that extends beyond the return from synchronize_rcu_tasks() is guaranteed to have executed a full memory barrier after the beginning of synchronize_rcu_tasks() and before the beginning of that RCU-tasks read-side critical section. Note that these guarantees include CPUs that are offline, idle, or executing in user mode, as well as CPUs that are executing in the kernel.

Furthermore, if CPU A invoked synchronize_rcu_tasks(), which returned to its caller on CPU B, then both CPU A and CPU B are guaranteed to have executed a full memory barrier during the execution of synchronize_rcu_tasks() – even if CPU A and CPU B are the same CPU (but again only if the system has more than one CPU).

void rcu_barrier_tasks(void)

Wait for in-flight call_rcu_tasks() callbacks.

Parameters

void
no arguments

Description

Although the current implementation is guaranteed to wait, it is not obligated to, for example, if there are no pending callbacks.

Device Resource Management

void * devres_alloc_node(dr_release_t release, size_t size, gfp_t gfp, int nid)

Allocate device resource data

Parameters

dr_release_t release
Release function devres will be associated with
size_t size
Allocation size
gfp_t gfp
Allocation flags
int nid
NUMA node

Description

Allocate devres of size bytes. The allocated area is zeroed, then associated with release. The returned pointer can be passed to other devres_*() functions.

Return

Pointer to allocated devres on success, NULL on failure.

void devres_for_each_res(struct device * dev, dr_release_t release, dr_match_t match, void * match_data, void (*fn) (struct device *, void *, void *), void * data)

Resource iterator

Parameters

struct device * dev
Device to iterate resource from
dr_release_t release
Look for resources associated with this release function
dr_match_t match
Match function (optional)
void * match_data
Data for the match function
void (*)(struct device *, void *, void *) fn
Function to be called for each matched resource.
void * data
Data for fn, the 3rd parameter of fn

Description

Call fn for each devres of dev which is associated with release and for which match returns 1.

Return

void
void devres_free(void * res)

Free device resource data

Parameters

void * res
Pointer to devres data to free

Description

Free devres created with devres_alloc().

void devres_add(struct device * dev, void * res)

Register device resource

Parameters

struct device * dev
Device to add resource to
void * res
Resource to register

Description

Register devres res to dev. res should have been allocated using devres_alloc(). On driver detach, the associated release function will be invoked and devres will be freed automatically.
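
devres_alloc() (the common wrapper around devres_alloc_node()) and devres_add() are normally used together to build a managed variant of an unmanaged setup call. A sketch, where my_clock_on()/my_clock_off() and the clk_id field are hypothetical hardware helpers:

```c
struct my_clock_state {
	int clk_id;
};

static void my_clock_release(struct device *dev, void *res)
{
	struct my_clock_state *st = res;

	my_clock_off(st->clk_id);	/* hypothetical hardware teardown */
}

static int my_clock_on_managed(struct device *dev, int clk_id)
{
	struct my_clock_state *st;

	st = devres_alloc(my_clock_release, sizeof(*st), GFP_KERNEL);
	if (!st)
		return -ENOMEM;

	my_clock_on(clk_id);		/* hypothetical hardware setup */
	st->clk_id = clk_id;
	devres_add(dev, st);		/* my_clock_release() runs on driver detach */
	return 0;
}
```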

void * devres_find(struct device * dev, dr_release_t release, dr_match_t match, void * match_data)

Find device resource

Parameters

struct device * dev
Device to lookup resource from
dr_release_t release
Look for resources associated with this release function
dr_match_t match
Match function (optional)
void * match_data
Data for the match function

Description

Find the latest devres of dev which is associated with release and for which match returns 1. If match is NULL, it’s considered to match all.

Return

Pointer to found devres, NULL if not found.

void * devres_get(struct device * dev, void * new_res, dr_match_t match, void * match_data)

Find devres, if non-existent, add one atomically

Parameters

struct device * dev
Device to lookup or add devres for
void * new_res
Pointer to new initialized devres to add if not found
dr_match_t match
Match function (optional)
void * match_data
Data for the match function

Description

Find the latest devres of dev which has the same release function as new_res and for which match returns 1. If found, new_res is freed; otherwise, new_res is added atomically.

Return

Pointer to found or added devres.

void * devres_remove(struct device * dev, dr_release_t release, dr_match_t match, void * match_data)

Find a device resource and remove it

Parameters

struct device * dev
Device to find resource from
dr_release_t release
Look for resources associated with this release function
dr_match_t match
Match function (optional)
void * match_data
Data for the match function

Description

Find the latest devres of dev associated with release and for which match returns 1. If match is NULL, it’s considered to match all. If found, the resource is removed atomically and returned.

Return

Pointer to removed devres on success, NULL if not found.

int devres_destroy(struct device * dev, dr_release_t release, dr_match_t match, void * match_data)

Find a device resource and destroy it

Parameters

struct device * dev
Device to find resource from
dr_release_t release
Look for resources associated with this release function
dr_match_t match
Match function (optional)
void * match_data
Data for the match function

Description

Find the latest devres of dev associated with release and for which match returns 1. If match is NULL, it’s considered to match all. If found, the resource is removed atomically and freed.

Note that the release function for the resource will not be called, only the devres-allocated data will be freed. The caller becomes responsible for freeing any other data.

Return

0 if devres is found and freed, -ENOENT if not found.

int devres_release(struct device * dev, dr_release_t release, dr_match_t match, void * match_data)

Find a device resource and destroy it, calling release

Parameters

struct device * dev
Device to find resource from
dr_release_t release
Look for resources associated with this release function
dr_match_t match
Match function (optional)
void * match_data
Data for the match function

Description

Find the latest devres of dev associated with release and for which match returns 1. If match is NULL, it’s considered to match all. If found, the resource is removed atomically, the release function called and the resource freed.

Return

0 if devres is found and freed, -ENOENT if not found.

void * devres_open_group(struct device * dev, void * id, gfp_t gfp)

Open a new devres group

Parameters

struct device * dev
Device to open devres group for
void * id
Separator ID
gfp_t gfp
Allocation flags

Description

Open a new devres group for dev with id. For id, using a pointer to an object which won’t be used for another group is recommended. If id is NULL, an address-wise unique ID is created.

Return

ID of the new group, NULL on failure.

void devres_close_group(struct device * dev, void * id)

Close a devres group

Parameters

struct device * dev
Device to close devres group for
void * id
ID of target group, can be NULL

Description

Close the group identified by id. If id is NULL, the latest open group is selected.

void devres_remove_group(struct device * dev, void * id)

Remove a devres group

Parameters

struct device * dev
Device to remove group for
void * id
ID of target group, can be NULL

Description

Remove the group identified by id. If id is NULL, the latest open group is selected. Note that removing a group doesn’t affect any other resources.

int devres_release_group(struct device * dev, void * id)

Release resources in a devres group

Parameters

struct device * dev
Device to release group for
void * id
ID of target group, can be NULL

Description

Release all resources in the group identified by id. If id is NULL, the latest open group is selected. The selected group and groups properly nested inside the selected group are removed.

Return

The number of released non-group resources.
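
Groups make it possible to roll back a partial initialization on error without undoing earlier, unrelated resources. A sketch under the assumption that my_setup_a()/my_setup_b() are devm-style helpers:

```c
static int my_init(struct device *dev)
{
	void *group;
	int err;

	group = devres_open_group(dev, NULL, GFP_KERNEL);
	if (!group)
		return -ENOMEM;

	err = my_setup_a(dev);		/* hypothetical managed setup steps */
	if (!err)
		err = my_setup_b(dev);
	if (err) {
		/* Release everything acquired since devres_open_group(). */
		devres_release_group(dev, group);
		return err;
	}

	devres_close_group(dev, group);	/* keep the resources, end the group */
	return 0;
}
```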

int devm_add_action(struct device * dev, void (*action) (void *), void * data)

add a custom action to list of managed resources

Parameters

struct device * dev
Device that owns the action
void (*)(void *) action
Function that should be called
void * data
Pointer to data passed to action implementation

Description

This adds a custom action to the list of managed resources so that it gets executed as part of standard resource unwinding.
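
This is handy for one-off teardown steps that have no dedicated devm_* helper. A sketch, where my_regulator_off() is a hypothetical cleanup:

```c
static void my_disable(void *data)
{
	my_regulator_off(data);		/* hypothetical teardown */
}

static int my_probe_step(struct device *dev, void *regulator)
{
	int err;

	err = devm_add_action(dev, my_disable, regulator);
	if (err) {
		my_disable(regulator);	/* action not registered; undo by hand */
		return err;
	}
	return 0;			/* my_disable() now runs on driver detach */
}
```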

void devm_remove_action(struct device * dev, void (*action) (void *), void * data)

removes previously added custom action

Parameters

struct device * dev
Device that owns the action
void (*)(void *) action
Function implementing the action
void * data
Pointer to data passed to action implementation

Description

Removes instance of action previously added by devm_add_action(). Both action and data should match one of the existing entries.

void * devm_kmalloc(struct device * dev, size_t size, gfp_t gfp)

Resource-managed kmalloc

Parameters

struct device * dev
Device to allocate memory for
size_t size
Allocation size
gfp_t gfp
Allocation gfp flags

Description

Managed kmalloc. Memory allocated with this function is automatically freed on driver detach. Like all other devres resources, guaranteed alignment is unsigned long long.

Return

Pointer to allocated memory on success, NULL on failure.
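
In a probe routine this removes the matching kfree() from every error path and from remove. A sketch, where struct my_priv is a hypothetical per-device structure:

```c
struct my_priv {
	int irq;
};

static int my_probe(struct device *dev)
{
	struct my_priv *priv;

	priv = devm_kmalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	dev_set_drvdata(dev, priv);
	return 0;	/* priv is freed automatically on driver detach */
}
```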

char * devm_kstrdup(struct device * dev, const char * s, gfp_t gfp)

Allocate resource managed space and copy an existing string into that.

Parameters

struct device * dev
Device to allocate memory for
const char * s
the string to duplicate
gfp_t gfp
the GFP mask used in the devm_kmalloc() call when allocating memory

Return

Pointer to allocated string on success, NULL on failure.

char * devm_kvasprintf(struct device * dev, gfp_t gfp, const char * fmt, va_list ap)

Allocate resource managed space and format a string into that.

Parameters

struct device * dev
Device to allocate memory for
gfp_t gfp
the GFP mask used in the devm_kmalloc() call when allocating memory
const char * fmt
The printf()-style format string
va_list ap
Arguments for the format string

Return

Pointer to allocated string on success, NULL on failure.

char * devm_kasprintf(struct device * dev, gfp_t gfp, const char * fmt, ...)

Allocate resource managed space and format a string into that.

Parameters

struct device * dev
Device to allocate memory for
gfp_t gfp
the GFP mask used in the devm_kmalloc() call when allocating memory
const char * fmt
The printf()-style format string
...
variable arguments

Return

Pointer to allocated string on success, NULL on failure.

void devm_kfree(struct device * dev, void * p)

Resource-managed kfree

Parameters

struct device * dev
Device this memory belongs to
void * p
Memory to free

Description

Free memory allocated with devm_kmalloc().

void * devm_kmemdup(struct device * dev, const void * src, size_t len, gfp_t gfp)

Resource-managed kmemdup

Parameters

struct device * dev
Device this memory belongs to
const void * src
Memory region to duplicate
size_t len
Memory region length
gfp_t gfp
GFP mask to use

Description

Duplicate a region of memory using resource-managed kmalloc.

unsigned long devm_get_free_pages(struct device * dev, gfp_t gfp_mask, unsigned int order)

Resource-managed __get_free_pages

Parameters

struct device * dev
Device to allocate memory for
gfp_t gfp_mask
Allocation gfp flags
unsigned int order
Allocation size is (1 << order) pages

Description

Managed get_free_pages. Memory allocated with this function is automatically freed on driver detach.

Return

Address of allocated memory on success, 0 on failure.

void devm_free_pages(struct device * dev, unsigned long addr)

Resource-managed free_pages

Parameters

struct device * dev
Device this memory belongs to
unsigned long addr
Memory to free

Description

Free memory allocated with devm_get_free_pages(). Unlike free_pages, there is no need to supply the order.

Device drivers infrastructure

The Basic Device Driver-Model Structures

struct bus_type

The bus type of the device

Definition

struct bus_type {
  const char * name;
  const char * dev_name;
  struct device * dev_root;
  struct device_attribute * dev_attrs;
  const struct attribute_group ** bus_groups;
  const struct attribute_group ** dev_groups;
  const struct attribute_group ** drv_groups;
  int (* match) (struct device *dev, struct device_driver *drv);
  int (* uevent) (struct device *dev, struct kobj_uevent_env *env);
  int (* probe) (struct device *dev);
  int (* remove) (struct device *dev);
  void (* shutdown) (struct device *dev);
  int (* online) (struct device *dev);
  int (* offline) (struct device *dev);
  int (* suspend) (struct device *dev, pm_message_t state);
  int (* resume) (struct device *dev);
  const struct dev_pm_ops * pm;
  const struct iommu_ops * iommu_ops;
  struct subsys_private * p;
  struct lock_class_key lock_key;
};

Members

const char * name
The name of the bus.
const char * dev_name
Used for subsystems to enumerate devices like (“foo%u”, dev->id).
struct device * dev_root
Default device to use as the parent.
struct device_attribute * dev_attrs
Default attributes of the devices on the bus.
const struct attribute_group ** bus_groups
Default attributes of the bus.
const struct attribute_group ** dev_groups
Default attributes of the devices on the bus.
const struct attribute_group ** drv_groups
Default attributes of the device drivers on the bus.
int (*)(struct device *dev, struct device_driver *drv) match
Called, perhaps multiple times, whenever a new device or driver is added for this bus. It should return a positive value if the given device can be handled by the given driver and zero otherwise. It may also return an error code if determining whether the driver supports the device is not possible. In the case of -EPROBE_DEFER, it will queue the device for deferred probing.
int (*)(struct device *dev, struct kobj_uevent_env *env) uevent
Called when a device is added, removed, or a few other things that generate uevents to add the environment variables.
int (*)(struct device *dev) probe
Called when a new device or driver is added to this bus; calls the matched driver’s probe to initialize the device.
int (*)(struct device *dev) remove
Called when a device is removed from this bus.
void (*)(struct device *dev) shutdown
Called at shut-down time to quiesce the device.
int (*)(struct device *dev) online
Called to put the device back online (after offlining it).
int (*)(struct device *dev) offline
Called to put the device offline for hot-removal. May fail.
int (*)(struct device *dev, pm_message_t state) suspend
Called when a device on this bus wants to go to sleep mode.
int (*)(struct device *dev) resume
Called to bring a device on this bus out of sleep mode.
const struct dev_pm_ops * pm
Power management operations of this bus; these call the matched device driver’s pm-ops.
const struct iommu_ops * iommu_ops
IOMMU specific operations for this bus, used to attach IOMMU driver implementations to a bus and allow the driver to do bus-specific setup
struct subsys_private * p
The private data of the driver core, only the driver core can touch this.
struct lock_class_key lock_key
Lock class key for use by the lock validator

Description

A bus is a channel between the processor and one or more devices. For the purposes of the device model, all devices are connected via a bus, even if it is an internal, virtual, “platform” bus. Buses can plug into each other. A USB controller is usually a PCI device, for example. The device model represents the actual connections between buses and the devices they control. A bus is represented by the bus_type structure. It contains the name, the default attributes, the bus’ methods, PM operations, and the driver core’s private data.
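
A minimal registration might look as follows. This is a sketch with the field set trimmed to the essentials; the “my_bus” naming and the bind-by-name policy are purely illustrative:

```c
static int my_bus_match(struct device *dev, struct device_driver *drv)
{
	/* Positive return means "this driver can handle dev". */
	return strcmp(dev_name(dev), drv->name) == 0;
}

static struct bus_type my_bus_type = {
	.name	= "my_bus",
	.match	= my_bus_match,
};

static int __init my_bus_init(void)
{
	return bus_register(&my_bus_type);	/* creates /sys/bus/my_bus */
}
```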

enum probe_type

device driver probe type to try. Device drivers may opt in for special handling of their respective probe routines. This tells the core what to expect and prefer.

Constants

PROBE_DEFAULT_STRATEGY
Used by drivers that work equally well whether probed synchronously or asynchronously.
PROBE_PREFER_ASYNCHRONOUS
Drivers for “slow” devices whose probing order is not essential for booting the system may opt into executing their probes asynchronously.
PROBE_FORCE_SYNCHRONOUS
Use this to annotate drivers that need their probe routines to run synchronously with driver and device registration (with the exception of -EPROBE_DEFER handling - re-probing always ends up being done asynchronously).

Description

Note that the end goal is to switch the kernel to use asynchronous probing by default, so annotating drivers with PROBE_PREFER_ASYNCHRONOUS is a temporary measure that allows us to speed up boot process while we are validating the rest of the drivers.

struct device_driver

The basic device driver structure

Definition

struct device_driver {
  const char * name;
  struct bus_type * bus;
  struct module * owner;
  const char * mod_name;
  bool suppress_bind_attrs;
  enum probe_type probe_type;
  const struct of_device_id * of_match_table;
  const struct acpi_device_id * acpi_match_table;
  int (* probe) (struct device *dev);
  int (* remove) (struct device *dev);
  void (* shutdown) (struct device *dev);
  int (* suspend) (struct device *dev, pm_message_t state);
  int (* resume) (struct device *dev);
  const struct attribute_group ** groups;
  const struct dev_pm_ops * pm;
  struct driver_private * p;
};

Members

const char * name
Name of the device driver.
struct bus_type * bus
The bus which the device of this driver belongs to.
struct module * owner
The module owner.
const char * mod_name
Used for built-in modules.
bool suppress_bind_attrs
Disables bind/unbind via sysfs.
enum probe_type probe_type
Type of the probe (synchronous or asynchronous) to use.
const struct of_device_id * of_match_table
The open firmware table.
const struct acpi_device_id * acpi_match_table
The ACPI match table.
int (*) (struct device *dev) probe
Called to query the existence of a specific device, whether this driver can work with it, and bind the driver to a specific device.
int (*) (struct device *dev) remove
Called when the device is removed from the system to unbind a device from this driver.
void (*) (struct device *dev) shutdown
Called at shut-down time to quiesce the device.
int (*) (struct device *dev, pm_message_t state) suspend
Called to put the device to sleep mode. Usually to a low power state.
int (*) (struct device *dev) resume
Called to bring a device from sleep mode.
const struct attribute_group ** groups
Default attributes that get created by the driver core automatically.
const struct dev_pm_ops * pm
Power management operations of the device which matched this driver.
struct driver_private * p
Driver core’s private data, no one other than the driver core can touch this.

Description

The device driver-model tracks all of the drivers known to the system. The main reason for this tracking is to enable the driver core to match up drivers with new devices. Once drivers are known objects within the system, however, a number of other things become possible. Device drivers can export information and configuration variables that are independent of any specific device.
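
A bare struct device_driver is rarely filled in directly; real drivers usually embed it in a bus-specific wrapper such as platform_driver or pci_driver. Still, the underlying shape can be sketched as below, where my_bus_type and the callbacks are hypothetical:

```c
static int my_drv_probe(struct device *dev)
{
	dev_info(dev, "bound\n");
	return 0;			/* 0 means the bind succeeded */
}

static int my_drv_remove(struct device *dev)
{
	return 0;
}

static struct device_driver my_driver = {
	.name	= "my_driver",
	.bus	= &my_bus_type,		/* hypothetical bus this driver serves */
	.owner	= THIS_MODULE,
	.probe	= my_drv_probe,
	.remove	= my_drv_remove,
};
/* Registered with driver_register(&my_driver) and
 * unregistered with driver_unregister(&my_driver). */
```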

struct subsys_interface

interfaces to device functions

Definition

struct subsys_interface {
  const char * name;
  struct bus_type * subsys;
  struct list_head node;
  int (* add_dev) (struct device *dev, struct subsys_interface *sif);
  void (* remove_dev) (struct device *dev, struct subsys_interface *sif);
};

Members

const char * name
name of the device function
struct bus_type * subsys
subsystem of the devices to attach to
struct list_head node
the list of functions registered at the subsystem
int (*)(struct device *dev, struct subsys_interface *sif) add_dev
device hookup to device function handler
void (*)(struct device *dev, struct subsys_interface *sif) remove_dev
device hookup to device function handler

Description

Simple interfaces attached to a subsystem. Multiple interfaces can attach to a subsystem and its devices. Unlike drivers, they do not exclusively claim or control devices. Interfaces usually represent a specific functionality of a subsystem/class of devices.

struct class

device classes

Definition

struct class {
  const char * name;
  struct module * owner;
  struct class_attribute * class_attrs;
  const struct attribute_group ** dev_groups;
  struct kobject * dev_kobj;
  int (* dev_uevent) (struct device *dev, struct kobj_uevent_env *env);
  char *(* devnode) (struct device *dev, umode_t *mode);
  void (* class_release) (struct class *class);
  void (* dev_release) (struct device *dev);
  int (* suspend) (struct device *dev, pm_message_t state);
  int (* resume) (struct device *dev);
  const struct kobj_ns_type_operations * ns_type;
  const void *(* namespace) (struct device *dev);
  const struct dev_pm_ops * pm;
  struct subsys_private * p;
};

Members

const char * name
Name of the class.
struct module * owner
The module owner.
struct class_attribute * class_attrs
Default attributes of this class.
const struct attribute_group ** dev_groups
Default attributes of the devices that belong to the class.
struct kobject * dev_kobj
The kobject that represents this class and links it into the hierarchy.
int (*)(struct device *dev, struct kobj_uevent_env *env) dev_uevent
Called when a device is added, removed from this class, or a few other things that generate uevents to add the environment variables.
char *(*)(struct device *dev, umode_t *mode) devnode
Callback to provide the devtmpfs device node name.
void (*)(struct class *class) class_release
Called to release this class.
void (*)(struct device *dev) dev_release
Called to release the device.
int (*)(struct device *dev, pm_message_t state) suspend
Used to put the device to sleep mode, usually to a low power state.
int (*)(struct device *dev) resume
Used to bring the device from the sleep mode.
const struct kobj_ns_type_operations * ns_type
Callbacks so sysfs can determine namespaces.
const void *(*)(struct device *dev) namespace
Namespace of the device that belongs to this class.
const struct dev_pm_ops * pm
The default device power management operations of this class.
struct subsys_private * p
The private data of the driver core, no one other than the driver core can touch this.

Description

A class is a higher-level view of a device that abstracts out low-level implementation details. Drivers may see a SCSI disk or an ATA disk, but, at the class level, they are all simply disks. Classes allow user space to work with devices based on what they do, rather than how they are connected or how they work.

struct device

The basic device structure

Definition

struct device {
  struct device * parent;
  struct device_private * p;
  struct kobject kobj;
  const char * init_name;
  const struct device_type * type;
  struct mutex mutex;
  struct bus_type * bus;
  struct device_driver * driver;
  void * platform_data;
  void * driver_data;
  struct dev_pm_info power;
  struct dev_pm_domain * pm_domain;
#ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN
  struct irq_domain * msi_domain;
#endif
#ifdef CONFIG_PINCTRL
  struct dev_pin_info * pins;
#endif
#ifdef CONFIG_GENERIC_MSI_IRQ
  struct list_head msi_list;
#endif
#ifdef CONFIG_NUMA
  int numa_node;
#endif
  u64 * dma_mask;
  u64 coherent_dma_mask;
  unsigned long dma_pfn_offset;
  struct device_dma_parameters * dma_parms;
  struct list_head dma_pools;
  struct dma_coherent_mem * dma_mem;
#ifdef CONFIG_DMA_CMA
  struct cma * cma_area;
#endif
  struct dev_archdata archdata;
  struct device_node * of_node;
  struct fwnode_handle * fwnode;
  dev_t devt;
  u32 id;
  spinlock_t devres_lock;
  struct list_head devres_head;
  struct klist_node knode_class;
  struct class * class;
  const struct attribute_group ** groups;
  void (* release) (struct device *dev);
  struct iommu_group * iommu_group;
  bool offline_disabled:1;
  bool offline:1;
};

Members

struct device * parent
The device’s “parent” device, the device to which it is attached. In most cases, a parent device is some sort of bus or host controller. If parent is NULL, the device is a top-level device, which is not usually what you want.
struct device_private * p
Holds the private data of the driver core portions of the device. See the comment of the struct device_private for detail.
struct kobject kobj
A top-level, abstract class from which other classes are derived.
const char * init_name
Initial name of the device.
const struct device_type * type
The type of device. This identifies the device type and carries type-specific information.
struct mutex mutex
Mutex to synchronize calls to its driver.
struct bus_type * bus
Type of bus device is on.
struct device_driver * driver
Which driver has allocated this
void * platform_data
Platform data specific to the device.
void * driver_data
Private pointer for driver specific info.
struct dev_pm_info power
For device power management. See Documentation/power/devices.txt for details.
struct dev_pm_domain * pm_domain
Provide callbacks that are executed during system suspend, hibernation, system resume and during runtime PM transitions along with subsystem-level and driver-level callbacks.
struct irq_domain * msi_domain
The generic MSI domain this device is using.
struct dev_pin_info * pins
For device pin management. See Documentation/pinctrl.txt for details.
struct list_head msi_list
Hosts MSI descriptors
int numa_node
NUMA node this device is close to.
u64 * dma_mask
Dma mask (if dma’ble device).
u64 coherent_dma_mask
Like dma_mask, but for alloc_coherent mappings, as not all hardware supports 64-bit addresses for consistent allocations such as descriptors.
unsigned long dma_pfn_offset
offset of the DMA memory range relative to RAM
struct device_dma_parameters * dma_parms
A low level driver may set these to teach IOMMU code about segment limitations.
struct list_head dma_pools
Dma pools (if dma’ble device).
struct dma_coherent_mem * dma_mem
Internal for coherent mem override.
struct cma * cma_area
Contiguous memory area for dma allocations
struct dev_archdata archdata
For arch-specific additions.
struct device_node * of_node
Associated device tree node.
struct fwnode_handle * fwnode
Associated device node supplied by platform firmware.
dev_t devt
For creating the sysfs “dev”.
u32 id
device instance
spinlock_t devres_lock
Spinlock to protect the resource of the device.
struct list_head devres_head
The resources list of the device.
struct klist_node knode_class
The node used to add the device to the class list.
struct class * class
The class of the device.
const struct attribute_group ** groups
Optional attribute groups.
void (*)(struct device *dev) release
Callback to free the device after all references have gone away. This should be set by the allocator of the device (i.e. the bus driver that discovered the device).
struct iommu_group * iommu_group
IOMMU group the device belongs to.
bool:1 offline_disabled
If set, the device is permanently online.
bool:1 offline
Set after successful invocation of the bus type’s offline() callback.

Example

For devices on custom boards, as typical of embedded and SoC-based hardware, Linux often uses platform_data to point to board-specific structures describing devices and how they are wired. That can include what ports are available, chip variants, which GPIO pins act in what additional roles, and so on. This shrinks the “Board Support Packages” (BSPs) and minimizes board-specific #ifdefs in drivers.

Description

At the lowest level, every device in a Linux system is represented by an instance of struct device. The device structure contains the information that the device model core needs to model the system. Most subsystems, however, track additional information about the devices they host. As a result, it is rare for devices to be represented by bare device structures; instead, that structure, like kobject structures, is usually embedded within a higher-level representation of the device.

module_driver(__driver, __register, __unregister, ...)

Helper macro for drivers that don’t do anything special in module init/exit. This eliminates a lot of boilerplate. Each module may only use this macro once, and calling it replaces module_init() and module_exit().

Parameters

__driver
driver name
__register
register function for this driver type
__unregister
unregister function for this driver type
...
variable arguments

Description

Use this macro to construct bus specific macros for registering drivers, and do not use it on its own.
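
As an illustration of such a bus-specific macro, the platform bus wraps module_driver() essentially like this (cf. include/linux/platform_device.h):

```c
/* Bus-specific convenience macro built on module_driver(): a platform
 * driver module then needs only "module_platform_driver(my_driver);". */
#define module_platform_driver(__platform_driver) \
	module_driver(__platform_driver, platform_driver_register, \
		      platform_driver_unregister)
```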

builtin_driver(__driver, __register, ...)

Helper macro for drivers that don’t do anything special in init and have no exit. This eliminates some boilerplate. Each driver may only use this macro once, and calling it replaces device_initcall (or in some cases, the legacy __initcall). This is meant to be a direct parallel of module_driver() above but without the __exit stuff that is not used for builtin cases.

Parameters

__driver
driver name
__register
register function for this driver type
...
variable arguments

Description

Use this macro to construct bus specific macros for registering drivers, and do not use it on its own.

Device Drivers Base

void driver_init(void)

initialize driver model.

Parameters

void
no arguments

Description

Call the driver model init functions to initialize their subsystems. Called early from init/main.c.

int driver_for_each_device(struct device_driver * drv, struct device * start, void * data, int (*fn) (struct device *, void *))

Iterator for devices bound to a driver.

Parameters

struct device_driver * drv
Driver we’re iterating.
struct device * start
Device to begin with
void * data
Data to pass to the callback.
int (*)(struct device *, void *) fn
Function to call for each device.

Description

Iterate over the drv‘s list of devices calling fn for each one.

struct device * driver_find_device(struct device_driver * drv, struct device * start, void * data, int (*match) (struct device *dev, void *data))

device iterator for locating a particular device.

Parameters

struct device_driver * drv
The device’s driver
struct device * start
Device to begin with
void * data
Data to pass to match function
int (*)(struct device *dev, void *data) match
Callback function to check device

Description

This is similar to the driver_for_each_device() function above, but it returns a reference to a device that is ‘found’ for later use, as determined by the match callback.

The callback should return 0 if the device doesn’t match and non-zero if it does. If the callback returns non-zero, this function will return to the caller and not iterate over any more devices.
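A minimal sketch of that lookup pattern, assuming a hypothetical foo_driver; note that the caller must balance the reference the iterator takes:

```c
/* Match callback: non-zero return stops the iteration. */
static int foo_match_name(struct device *dev, void *data)
{
	return sysfs_streq(dev_name(dev), data);
}

static struct device *foo_lookup(const char *name)
{
	struct device *dev;

	dev = driver_find_device(&foo_driver.driver, NULL,
				 (void *)name, foo_match_name);
	/* caller must put_device(dev) when done, if non-NULL */
	return dev;
}
```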

int driver_create_file(struct device_driver * drv, const struct driver_attribute * attr)

create sysfs file for driver.

Parameters

struct device_driver * drv
driver.
const struct driver_attribute * attr
driver attribute descriptor.
void driver_remove_file(struct device_driver * drv, const struct driver_attribute * attr)

remove sysfs file for driver.

Parameters

struct device_driver * drv
driver.
const struct driver_attribute * attr
driver attribute descriptor.
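As a sketch of how the create/remove pair above is typically used, a read-only driver attribute can be declared with DRIVER_ATTR_RO(); the "version" attribute and the foo_* helpers are illustrative only:

```c
/* sysfs show routine for a read-only driver attribute */
static ssize_t version_show(struct device_driver *drv, char *buf)
{
	return sprintf(buf, "1.0\n");
}
static DRIVER_ATTR_RO(version);	/* defines driver_attr_version */

static int foo_add_attrs(struct device_driver *drv)
{
	/* typically called right after driver_register() succeeds */
	return driver_create_file(drv, &driver_attr_version);
}

static void foo_del_attrs(struct device_driver *drv)
{
	driver_remove_file(drv, &driver_attr_version);
}
```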
int driver_register(struct device_driver * drv)

register driver with bus

Parameters

struct device_driver * drv
driver to register

Description

We pass off most of the work to the bus_add_driver() call, since most of the things we have to do deal with the bus structures.

void driver_unregister(struct device_driver * drv)

remove driver from system.

Parameters

struct device_driver * drv
driver.

Description

Again, we pass off most of the work to the bus-level call.

struct device_driver * driver_find(const char * name, struct bus_type * bus)

locate driver on a bus by its name.

Parameters

const char * name
name of the driver.
struct bus_type * bus
bus to scan for the driver.

Description

Call kset_find_obj() to iterate over the list of drivers on a bus and find the driver by name. Returns the driver if found.

This routine provides no locking to prevent the driver it returns from being unregistered or unloaded while the caller is using it. The caller is responsible for preventing this.

const char * dev_driver_string(const struct device * dev)

Return a device’s driver name, if at all possible

Parameters

const struct device * dev
struct device to get the name of

Description

Will return the name of the device’s driver if the device is bound to one. If the device is not bound to a driver, it will return the name of the bus it is attached to. If it is not attached to a bus either, an empty string will be returned.

int device_create_file(struct device * dev, const struct device_attribute * attr)

create sysfs attribute file for device.

Parameters

struct device * dev
device.
const struct device_attribute * attr
device attribute descriptor.
void device_remove_file(struct device * dev, const struct device_attribute * attr)

remove sysfs attribute file.

Parameters

struct device * dev
device.
const struct device_attribute * attr
device attribute descriptor.
bool device_remove_file_self(struct device * dev, const struct device_attribute * attr)

remove sysfs attribute file from its own method.

Parameters

struct device * dev
device.
const struct device_attribute * attr
device attribute descriptor.

Description

See kernfs_remove_self() for details.

int device_create_bin_file(struct device * dev, const struct bin_attribute * attr)

create sysfs binary attribute file for device.

Parameters

struct device * dev
device.
const struct bin_attribute * attr
device binary attribute descriptor.
void device_remove_bin_file(struct device * dev, const struct bin_attribute * attr)

remove sysfs binary attribute file

Parameters

struct device * dev
device.
const struct bin_attribute * attr
device binary attribute descriptor.
void device_initialize(struct device * dev)

init device structure.

Parameters

struct device * dev
device.

Description

This prepares the device for use by other layers by initializing its fields. It is the first half of device_register(), if called by that function, though it can also be called separately, so one may use dev‘s fields. In particular, get_device()/put_device() may be used for reference counting of dev after calling this function.

All fields in dev must be initialized by the caller to 0, except for those explicitly set to some other value. The simplest approach is to use kzalloc() to allocate the structure containing dev.

NOTE

Use put_device() to give up your reference instead of freeing dev directly once you have called this function.
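Putting the above together, the split init/add sequence looks roughly like this; struct foo_device and foo_release are hypothetical, and the key points are the kzalloc() and the put_device() on the error path:

```c
struct foo_device {
	struct device dev;
	/* driver-private state ... */
};

static void foo_release(struct device *dev)
{
	kfree(container_of(dev, struct foo_device, dev));
}

static int foo_create(void)
{
	struct foo_device *foo = kzalloc(sizeof(*foo), GFP_KERNEL);
	int ret;

	if (!foo)
		return -ENOMEM;

	device_initialize(&foo->dev);	/* refcounting is live from here */
	foo->dev.release = foo_release;

	ret = dev_set_name(&foo->dev, "foo%d", 0);
	if (ret)
		goto err;

	ret = device_add(&foo->dev);
	if (ret)
		goto err;
	return 0;
err:
	put_device(&foo->dev);	/* never kfree() directly after init */
	return ret;
}
```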

int dev_set_name(struct device * dev, const char * fmt, ...)

set a device name

Parameters

struct device * dev
device
const char * fmt
format string for the device’s name
...
variable arguments
int device_add(struct device * dev)

add device to device hierarchy.

Parameters

struct device * dev
device.

Description

This is part 2 of device_register(), though may be called separately _iff_ device_initialize() has been called separately.

This adds dev to the kobject hierarchy via kobject_add(), adds it to the global and sibling lists for the device, then adds it to the other relevant subsystems of the driver model.

Do not call this routine or device_register() more than once for any device structure. The driver model core is not designed to work with devices that get unregistered and then spring back to life. (Among other things, it’s very hard to guarantee that all references to the previous incarnation of dev have been dropped.) Allocate and register a fresh new struct device instead.

NOTE

_Never_ directly free dev after calling this function, even if it returned an error! Always use put_device() to give up your reference instead.

int device_register(struct device * dev)

register a device with the system.

Parameters

struct device * dev
pointer to the device structure

Description

This happens in two clean steps: initialize the device and add it to the system. The two steps can be called separately, but this combined call is the easiest and most common approach. I.e., you should only call the two helpers separately if you have a clearly defined need to use and refcount the device before it is added to the hierarchy.

For more information, see the kerneldoc for device_initialize() and device_add().

NOTE

_Never_ directly free dev after calling this function, even if it returned an error! Always use put_device() to give up the reference initialized in this function instead.

struct device * get_device(struct device * dev)

increment reference count for device.

Parameters

struct device * dev
device.

Description

This simply forwards the call to kobject_get(), though we do take care to provide for the case that we get a NULL pointer passed in.

void put_device(struct device * dev)

decrement reference count.

Parameters

struct device * dev
device in question.
void device_del(struct device * dev)

delete device from system.

Parameters

struct device * dev
device.

Description

This is the first part of the device unregistration sequence. This removes the device from the lists we control from here, has it removed from the other driver model subsystems it was added to in device_add(), and removes it from the kobject hierarchy.

NOTE

this should be called manually _iff_ device_add() was also called manually.

void device_unregister(struct device * dev)

unregister device from system.

Parameters

struct device * dev
device going away.

Description

We do this in two parts, like we do device_register(). First, we remove it from all the subsystems with device_del(), then we decrement the reference count via put_device(). If that is the final reference count, the device will be cleaned up via device_release() above. Otherwise, the structure will stick around until the final reference to the device is dropped.

int device_for_each_child(struct device * parent, void * data, int (*fn) (struct device *dev, void *data))

device child iterator.

Parameters

struct device * parent
parent struct device.
void * data
data for the callback.
int (*)(struct device *dev, void *data) fn
function to be called for each device.

Description

Iterate over parent‘s child devices, and call fn for each, passing it data.

We check the return of fn each time. If it returns anything other than 0, we break out and return that value.
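A small sketch of that contract; foo_count_child and foo_child_count are illustrative names:

```c
/* Callback: returning 0 continues the walk; any other value stops
 * the iteration and is propagated back to the caller. */
static int foo_count_child(struct device *dev, void *data)
{
	(*(unsigned int *)data)++;
	return 0;
}

static unsigned int foo_child_count(struct device *parent)
{
	unsigned int n = 0;

	device_for_each_child(parent, &n, foo_count_child);
	return n;
}
```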

int device_for_each_child_reverse(struct device * parent, void * data, int (*fn) (struct device *dev, void *data))

device child iterator in reversed order.

Parameters

struct device * parent
parent struct device.
void * data
data for the callback.
int (*)(struct device *dev, void *data) fn
function to be called for each device.

Description

Iterate over parent‘s child devices, and call fn for each, passing it data.

We check the return of fn each time. If it returns anything other than 0, we break out and return that value.

struct device * device_find_child(struct device * parent, void * data, int (*match) (struct device *dev, void *data))

device iterator for locating a particular device.

Parameters

struct device * parent
parent struct device
void * data
Data to pass to match function
int (*)(struct device *dev, void *data) match
Callback function to check device

Description

This is similar to the device_for_each_child() function above, but it returns a reference to a device that is ‘found’ for later use, as determined by the match callback.

The callback should return 0 if the device doesn’t match and non-zero if it does. If the callback returns non-zero and a reference to the current device can be obtained, this function will return to the caller and not iterate over any more devices.

NOTE

you will need to drop the reference with put_device() after use.
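A hedged sketch of the lookup-by-dev_t pattern; the foo_* helpers are hypothetical, and the put_device() balancing the acquired reference is the part that matters:

```c
static int foo_match_devt(struct device *dev, void *data)
{
	return dev->devt == *(dev_t *)data;
}

static void foo_use_child(struct device *parent, dev_t devt)
{
	struct device *child;

	child = device_find_child(parent, &devt, foo_match_devt);
	if (!child)
		return;
	/* ... use the child ... */
	put_device(child);	/* mandatory: drop the acquired reference */
}
```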

struct device * __root_device_register(const char * name, struct module * owner)

allocate and register a root device

Parameters

const char * name
root device name
struct module * owner
owner module of the root device, usually THIS_MODULE

Description

This function allocates a root device and registers it using device_register(). In order to free the returned device, use root_device_unregister().

Root devices are dummy devices which allow other devices to be grouped under /sys/devices. Use this function to allocate a root device and then use it as the parent of any device which should appear under /sys/devices/{name}.

The /sys/devices/{name} directory will also contain a ‘module’ symlink which points to the owner directory in sysfs.

Returns struct device pointer on success, or ERR_PTR() on error.

Note

You probably want to use root_device_register().

void root_device_unregister(struct device * dev)

unregister and free a root device

Parameters

struct device * dev
device going away

Description

This function unregisters and cleans up a device that was created by root_device_register().

struct device * device_create_vargs(struct class * class, struct device * parent, dev_t devt, void * drvdata, const char * fmt, va_list args)

creates a device and registers it with sysfs

Parameters

struct class * class
pointer to the struct class that this device should be registered to
struct device * parent
pointer to the parent struct device of this new device, if any
dev_t devt
the dev_t for the char device to be added
void * drvdata
the data to be added to the device for callbacks
const char * fmt
string for the device’s name
va_list args
va_list for the device’s name

Description

This function can be used by char device classes. A struct device will be created in sysfs, registered to the specified class.

A “dev” file will be created, showing the dev_t for the device, if the dev_t is not 0,0. If a pointer to a parent struct device is passed in, the newly created struct device will be a child of that device in sysfs. The pointer to the struct device will be returned from the call. Any further sysfs files that might be required can be created using this pointer.

Returns struct device pointer on success, or ERR_PTR() on error.

Note

the struct class passed to this function must have previously been created with a call to class_create().

struct device * device_create(struct class * class, struct device * parent, dev_t devt, void * drvdata, const char * fmt, ...)

creates a device and registers it with sysfs

Parameters

struct class * class
pointer to the struct class that this device should be registered to
struct device * parent
pointer to the parent struct device of this new device, if any
dev_t devt
the dev_t for the char device to be added
void * drvdata
the data to be added to the device for callbacks
const char * fmt
string for the device’s name
...
variable arguments

Description

This function can be used by char device classes. A struct device will be created in sysfs, registered to the specified class.

A “dev” file will be created, showing the dev_t for the device, if the dev_t is not 0,0. If a pointer to a parent struct device is passed in, the newly created struct device will be a child of that device in sysfs. The pointer to the struct device will be returned from the call. Any further sysfs files that might be required can be created using this pointer.

Returns struct device pointer on success, or ERR_PTR() on error.

Note

the struct class passed to this function must have previously been created with a call to class_create().
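A sketch of the usual char-device class lifecycle around device_create(); foo_class and the foo_sysfs_* helpers are illustrative, and devt is assumed to come from alloc_chrdev_region():

```c
static struct class *foo_class;

static int foo_sysfs_setup(dev_t devt)
{
	struct device *dev;

	foo_class = class_create(THIS_MODULE, "foo");
	if (IS_ERR(foo_class))
		return PTR_ERR(foo_class);

	dev = device_create(foo_class, NULL, devt, NULL,
			    "foo%d", MINOR(devt));
	if (IS_ERR(dev)) {
		class_destroy(foo_class);
		return PTR_ERR(dev);
	}
	return 0;
}

static void foo_sysfs_teardown(dev_t devt)
{
	device_destroy(foo_class, devt);	/* reverse order on exit */
	class_destroy(foo_class);
}
```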

struct device * device_create_with_groups(struct class * class, struct device * parent, dev_t devt, void * drvdata, const struct attribute_group ** groups, const char * fmt, ...)

creates a device and registers it with sysfs

Parameters

struct class * class
pointer to the struct class that this device should be registered to
struct device * parent
pointer to the parent struct device of this new device, if any
dev_t devt
the dev_t for the char device to be added
void * drvdata
the data to be added to the device for callbacks
const struct attribute_group ** groups
NULL-terminated list of attribute groups to be created
const char * fmt
string for the device’s name
...
variable arguments

Description

This function can be used by char device classes. A struct device will be created in sysfs, registered to the specified class. Additional attributes specified in the groups parameter will also be created automatically.

A “dev” file will be created, showing the dev_t for the device, if the dev_t is not 0,0. If a pointer to a parent struct device is passed in, the newly created struct device will be a child of that device in sysfs. The pointer to the struct device will be returned from the call. Any further sysfs files that might be required can be created using this pointer.

Returns struct device pointer on success, or ERR_PTR() on error.

Note

the struct class passed to this function must have previously been created with a call to class_create().

void device_destroy(struct class * class, dev_t devt)

removes a device that was created with device_create()

Parameters

struct class * class
pointer to the struct class that this device was registered with
dev_t devt
the dev_t of the device that was previously registered

Description

This call unregisters and cleans up a device that was created with a call to device_create().

int device_rename(struct device * dev, const char * new_name)

renames a device

Parameters

struct device * dev
the pointer to the struct device to be renamed
const char * new_name
the new name of the device

Description

It is the responsibility of the caller to provide mutual exclusion between two different calls of device_rename on the same device to ensure that new_name is valid and won’t conflict with other devices.

Note

Don’t call this function. Currently, the networking layer calls this function, but that will change. The following text from Kay Sievers offers some insight:

Renaming devices is racy at many levels, symlinks and other stuff are not replaced atomically, and you get a “move” uevent, but it’s not easy to connect the event to the old and new device. Device nodes are not renamed at all, there isn’t even support for that in the kernel now.

In the meantime, during renaming, your target name might be taken by another driver, creating conflicts. Or the old name is taken directly after you renamed it – then you get events for the same DEVPATH, before you even see the “move” event. It’s just a mess, and nothing new should ever rely on kernel device renaming. Besides that, it’s not even implemented now for other things than (driver-core wise very simple) network devices.

We are currently about to change network renaming in udev to completely disallow renaming of devices in the same namespace as the kernel uses, because we can’t solve the problems properly, that arise with swapping names of multiple interfaces without races. Means, renaming of eth[0-9]* will only be allowed to some other name than eth[0-9]*, for the aforementioned reasons.

Make up a “real” name in the driver before you register anything, or add some other attributes for userspace to find the device, or use udev to add symlinks – but never rename kernel devices later, it’s a complete mess. We don’t even want to get into that and try to implement the missing pieces in the core. We really have other pieces to fix in the driver core mess. :)

int device_move(struct device * dev, struct device * new_parent, enum dpm_order dpm_order)

moves a device to a new parent

Parameters

struct device * dev
the pointer to the struct device to be moved
struct device * new_parent
the new parent of the device (can be NULL)
enum dpm_order dpm_order
how to reorder the dpm_list
void set_primary_fwnode(struct device * dev, struct fwnode_handle * fwnode)

Change the primary firmware node of a given device.

Parameters

struct device * dev
Device to handle.
struct fwnode_handle * fwnode
New primary firmware node of the device.

Description

Set the device’s firmware node pointer to fwnode, but if a secondary firmware node of the device is present, preserve it.

void register_syscore_ops(struct syscore_ops * ops)

Register a set of system core operations.

Parameters

struct syscore_ops * ops
System core operations to register.
void unregister_syscore_ops(struct syscore_ops * ops)

Unregister a set of system core operations.

Parameters

struct syscore_ops * ops
System core operations to unregister.
int syscore_suspend(void)

Execute all the registered system core suspend callbacks.

Parameters

void
no arguments

Description

This function is executed with one CPU on-line and disabled interrupts.

void syscore_resume(void)

Execute all the registered system core resume callbacks.

Parameters

void
no arguments

Description

This function is executed with one CPU on-line and disabled interrupts.

struct class * __class_create(struct module * owner, const char * name, struct lock_class_key * key)

create a struct class structure

Parameters

struct module * owner
pointer to the module that is to “own” this struct class
const char * name
pointer to a string for the name of this class.
struct lock_class_key * key
the lock_class_key for this class; used by mutex lock debugging

Description

This is used to create a struct class pointer that can then be used in calls to device_create().

Returns struct class pointer on success, or ERR_PTR() on error.

Note, the pointer created here is to be destroyed when finished by making a call to class_destroy().

void class_destroy(struct class * cls)

destroys a struct class structure

Parameters

struct class * cls
pointer to the struct class that is to be destroyed

Description

Note, the pointer to be destroyed must have been created with a call to class_create().

void class_dev_iter_init(struct class_dev_iter * iter, struct class * class, struct device * start, const struct device_type * type)

initialize class device iterator

Parameters

struct class_dev_iter * iter
class iterator to initialize
struct class * class
the class we wanna iterate over
struct device * start
the device to start iterating from, if any
const struct device_type * type
device_type of the devices to iterate over, NULL for all

Description

Initialize class iterator iter such that it iterates over devices of class. If start is set, the list iteration will start there, otherwise if it is NULL, the iteration starts at the beginning of the list.

struct device * class_dev_iter_next(struct class_dev_iter * iter)

iterate to the next device

Parameters

struct class_dev_iter * iter
class iterator to proceed

Description

Proceed iter to the next device and return it. Returns NULL if iteration is complete.

The returned device is referenced and won’t be released until the iterator proceeds to the next device or exits. The caller is free to do whatever it wants with the device, including calling back into class code.

void class_dev_iter_exit(struct class_dev_iter * iter)

finish iteration

Parameters

struct class_dev_iter * iter
class iterator to finish

Description

Finish an iteration. Always call this function after iteration is complete whether the iteration ran till the end or not.
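The init/next/exit triplet above is used together, roughly as in this sketch (foo_walk_class is an illustrative name):

```c
static void foo_walk_class(struct class *cls)
{
	struct class_dev_iter iter;
	struct device *dev;

	class_dev_iter_init(&iter, cls, NULL, NULL);
	while ((dev = class_dev_iter_next(&iter))) {
		/* dev stays referenced until the next iteration step */
	}
	class_dev_iter_exit(&iter);	/* required even after an early break */
}
```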

int class_for_each_device(struct class * class, struct device * start, void * data, int (*fn) (struct device *, void *))

device iterator

Parameters

struct class * class
the class we’re iterating
struct device * start
the device to start with in the list, if any.
void * data
data for the callback
int (*)(struct device *, void *) fn
function to be called for each device

Description

Iterate over class‘s list of devices, and call fn for each, passing it data. If start is set, the list iteration will start there, otherwise if it is NULL, the iteration starts at the beginning of the list.

We check the return of fn each time. If it returns anything other than 0, we break out and return that value.

fn is allowed to do anything including calling back into class code. There’s no locking restriction.

struct device * class_find_device(struct class * class, struct device * start, const void * data, int (*match) (struct device *, const void *))

device iterator for locating a particular device

Parameters

struct class * class
the class we’re iterating
struct device * start
Device to begin with
const void * data
data for the match function
int (*)(struct device *, const void *) match
function to check device

Description

This is similar to the class_for_each_device() function above, but it returns a reference to a device that is ‘found’ for later use, as determined by the match callback.

The callback should return 0 if the device doesn’t match and non-zero if it does. If the callback returns non-zero, this function will return to the caller and not iterate over any more devices.

Note, you will need to drop the reference with put_device() after use.

match is allowed to do anything including calling back into class code. There’s no locking restriction.
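A sketch of a class lookup by minor number; foo_match_minor and foo_find_by_minor are hypothetical, and note the const void * in the match callback:

```c
/* match callback takes a const void * for class_find_device() */
static int foo_match_minor(struct device *dev, const void *data)
{
	return MINOR(dev->devt) == *(const unsigned int *)data;
}

static struct device *foo_find_by_minor(struct class *cls,
					unsigned int minor)
{
	struct device *dev;

	dev = class_find_device(cls, NULL, &minor, foo_match_minor);
	/* caller must put_device(dev) after use, if non-NULL */
	return dev;
}
```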

struct class_compat * class_compat_register(const char * name)

register a compatibility class

Parameters

const char * name
the name of the class

Description

Compatibility classes are meant as a temporary user-space compatibility workaround when converting a family of class devices to bus devices.

void class_compat_unregister(struct class_compat * cls)

unregister a compatibility class

Parameters

struct class_compat * cls
the class to unregister

int class_compat_create_link(struct class_compat * cls, struct device * dev, struct device * device_link)

create a compatibility class device link to a bus device

Parameters

struct class_compat * cls
the compatibility class
struct device * dev
the target bus device
struct device * device_link
an optional device to which a “device” link should be created

void class_compat_remove_link(struct class_compat * cls, struct device * dev, struct device * device_link)

remove a compatibility class device link to a bus device

Parameters

struct class_compat * cls
the compatibility class
struct device * dev
the target bus device
struct device * device_link
an optional device to which a “device” link was previously created
void unregister_node(struct node * node)

unregister a node device

Parameters

struct node * node
node going away

Description

Unregisters a node device node. All the devices on the node must be unregistered before calling this function.

int request_firmware(const struct firmware ** firmware_p, const char * name, struct device * device)

send firmware request and wait for it

Parameters

const struct firmware ** firmware_p
pointer to firmware image
const char * name
name of firmware file
struct device * device
device for which firmware is being loaded

Description

firmware_p will be used to return a firmware image by the name of name for device device.

Should be called from user context where sleeping is allowed.

name will be used as $FIRMWARE in the uevent environment and should be distinctive enough not to be confused with any other firmware image for this or any other device.

Caller must hold the reference count of device.

The function can be called safely inside device’s suspend and resume callback.
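The usual synchronous load/use/release cycle from a probe() routine looks roughly like this; "foo-fw.bin" and foo_upload() are placeholders:

```c
static int foo_load_fw(struct device *dev)
{
	const struct firmware *fw;
	int ret;

	ret = request_firmware(&fw, "foo-fw.bin", dev);
	if (ret)
		return ret;	/* image missing or load failed */

	/* fw->data / fw->size describe the image */
	ret = foo_upload(dev, fw->data, fw->size);

	release_firmware(fw);	/* always release when done */
	return ret;
}
```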

int request_firmware_direct(const struct firmware ** firmware_p, const char * name, struct device * device)

load firmware directly without usermode helper

Parameters

const struct firmware ** firmware_p
pointer to firmware image
const char * name
name of firmware file
struct device * device
device for which firmware is being loaded

Description

This function works pretty much like request_firmware(), but it doesn’t fall back to the usermode helper even if the firmware couldn’t be loaded directly from the filesystem. Hence it’s useful for loading optional firmware images, which aren’t always present, without the extra-long udev timeouts.

void release_firmware(const struct firmware * fw)

release the resource associated with a firmware image

Parameters

const struct firmware * fw
firmware resource to release
int request_firmware_nowait(struct module * module, bool uevent, const char * name, struct device * device, gfp_t gfp, void * context, void (*cont) (const struct firmware *fw, void *context))

asynchronous version of request_firmware

Parameters

struct module * module
module requesting the firmware
bool uevent
sends a uevent to copy the firmware image if this flag is non-zero; otherwise the firmware copy must be done manually.
const char * name
name of firmware file
struct device * device
device for which firmware is being loaded
gfp_t gfp
allocation flags
void * context
will be passed over to cont, and fw may be NULL if firmware request fails.
void (*)(const struct firmware *fw, void *context) cont
function will be called asynchronously when the firmware request is over.

Description

Caller must hold the reference count of device.

Asynchronous variant of request_firmware() for user contexts:

  • sleep for as small periods as possible since it may increase kernel boot time of built-in device drivers requesting firmware in their ->probe() methods, if gfp is GFP_KERNEL.

  • can’t sleep at all if gfp is GFP_ATOMIC.
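A sketch of the asynchronous variant; foo_fw_ready, foo_upload() and "foo-fw.bin" are illustrative, and the callback is responsible for releasing the image:

```c
static void foo_fw_ready(const struct firmware *fw, void *context)
{
	struct device *dev = context;

	if (!fw)		/* the request failed */
		return;
	foo_upload(dev, fw->data, fw->size);
	release_firmware(fw);	/* the callback owns the image */
}

static int foo_load_fw_async(struct device *dev)
{
	return request_firmware_nowait(THIS_MODULE, true /* uevent */,
				       "foo-fw.bin", dev, GFP_KERNEL,
				       dev, foo_fw_ready);
}
```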
int transport_class_register(struct transport_class * tclass)

register an initial transport class

Parameters

struct transport_class * tclass
a pointer to the transport class structure to be initialised

Description

The transport class contains an embedded class which is used to identify it. The caller should initialise this structure with zeros, and the embedded class must then be initialised with the actual transport class’s unique name. There’s a macro, DECLARE_TRANSPORT_CLASS(), to do this (declared classes still must be registered).

Returns 0 on success or error on failure.

void transport_class_unregister(struct transport_class * tclass)

unregister a previously registered class

Parameters

struct transport_class * tclass
The transport class to unregister

Description

Must be called prior to deallocating the memory for the transport class.

int anon_transport_class_register(struct anon_transport_class * atc)

register an anonymous class

Parameters

struct anon_transport_class * atc
The anon transport class to register

Description

The anonymous transport class contains both a transport class and a container. The idea of an anonymous class is that it never actually has any device attributes associated with it (and thus saves on container storage), so it can only be used for triggering events. Zero the structure beforehand and then use DECLARE_ANON_TRANSPORT_CLASS() to initialise the anon transport class storage.

void anon_transport_class_unregister(struct anon_transport_class * atc)

unregister an anon class

Parameters

struct anon_transport_class * atc
Pointer to the anon transport class to unregister

Description

Must be called prior to deallocating the memory for the anon transport class.

void transport_setup_device(struct device * dev)

declare a new dev for transport class association but don’t make it visible yet.

Parameters

struct device * dev
the generic device representing the entity being added

Description

Usually, dev represents some component in the HBA system (either the HBA itself or a device remote across the HBA bus). This routine is simply a trigger point to see if any set of transport classes wishes to associate with the added device. This allocates storage for the class device and initialises it, but does not yet add it to the system or add attributes to it (you do this with transport_add_device). If you have no need for separate setup and add operations, use transport_register_device (see transport_class.h).

void transport_add_device(struct device * dev)

declare a new dev for transport class association

Parameters

struct device * dev
the generic device representing the entity being added

Description

Usually, dev represents some component in the HBA system (either the HBA itself or a device remote across the HBA bus). This routine is simply a trigger point used to add the device to the system and register attributes for it.

void transport_configure_device(struct device * dev)

configure an already set up device

Parameters

struct device * dev
generic device representing device to be configured

Description

The idea of configure is simply to provide a point within the setup process to allow the transport class to extract information from a device after it has been set up. This is used in SCSI because we have to have a setup device to begin using the HBA, but after we send the initial inquiry, we use configure to extract the device parameters. The device need not have been added to be configured.

void transport_remove_device(struct device * dev)

remove the visibility of a device

Parameters

struct device * dev
generic device to remove

Description

This call removes the visibility of the device (to the user from sysfs), but does not destroy it. To eliminate a device entirely you must also call transport_destroy_device. If you don’t need to do remove and destroy as separate operations, use transport_unregister_device() (see transport_class.h) which will perform both calls for you.

void transport_destroy_device(struct device * dev)

destroy a removed device

Parameters

struct device * dev
device to eliminate from the transport class.

Description

This call triggers the elimination of storage associated with the transport classdev. Note: all it really does is relinquish a reference to the classdev. The memory will not be freed until the last reference goes to zero. Note also that the classdev retains a reference count on dev, so dev too will remain for as long as the transport class device remains around.

int device_bind_driver(struct device * dev)

bind a driver to one device.

Parameters

struct device * dev
device.

Description

Allow manual attachment of a driver to a device. Caller must have already set dev->driver.

Note that this does not modify the bus reference count nor take the bus’s rwsem. Please verify those are accounted for before calling this. (It is ok to call with no other effort from a driver’s probe() method.)

This function must be called with the device lock held.

void wait_for_device_probe(void)

Parameters

void
no arguments

Description

Wait for device probing to be completed.

int device_attach(struct device * dev)

try to attach device to a driver.

Parameters

struct device * dev
device.

Description

Walk the list of drivers that the bus has and call driver_probe_device() for each pair. If a compatible pair is found, break out and return.

Returns 1 if the device was bound to a driver; 0 if no matching driver was found; -ENODEV if the device is not registered.

When called for a USB interface, dev->parent lock must be held.

int driver_attach(struct device_driver * drv)

try to bind driver to devices.

Parameters

struct device_driver * drv
driver.

Description

Walk the list of devices that the bus has on it and try to match the driver with each one. If driver_probe_device() returns 0 and the dev->driver is set, we’ve found a compatible pair.

void device_release_driver(struct device * dev)

manually detach device from driver.

Parameters

struct device * dev
device.

Description

Manually detach device from driver. When called for a USB interface, dev->parent lock must be held.

struct platform_device * platform_device_register_resndata(struct device * parent, const char * name, int id, const struct resource * res, unsigned int num, const void * data, size_t size)

add a platform-level device with resources and platform-specific data

Parameters

struct device * parent
parent device for the device we’re adding
const char * name
base name of the device we’re adding
int id
instance id
const struct resource * res
set of resources that needs to be allocated for the device
unsigned int num
number of resources
const void * data
platform specific data for this platform device
size_t size
size of platform specific data

Description

Returns struct platform_device pointer on success, or ERR_PTR() on error.

struct platform_device * platform_device_register_simple(const char * name, int id, const struct resource * res, unsigned int num)

add a platform-level device and its resources

Parameters

const char * name
base name of the device we’re adding
int id
instance id
const struct resource * res
set of resources that needs to be allocated for the device
unsigned int num
number of resources

Description

This function creates a simple platform device that requires minimal resource and memory management. A canned release function, which frees the memory allocated for the device, allows drivers using such devices to be unloaded without waiting for the last reference to the device to be dropped.

This interface is primarily intended for use with legacy drivers which probe hardware directly. Because such drivers create sysfs device nodes themselves, rather than letting system infrastructure handle such device enumeration tasks, they don’t fully conform to the Linux driver model. In particular, when such drivers are built as modules, they can’t be “hotplugged”.

Returns struct platform_device pointer on success, or ERR_PTR() on error.
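A hedged sketch of a legacy-style module using this interface; the "example" name and module hooks are illustrative, and an id of -1 means the device carries no instance number:

```c
#include <linux/platform_device.h>
#include <linux/err.h>

static struct platform_device *example_pdev;

static int __init example_init(void)
{
	example_pdev = platform_device_register_simple("example", -1,
						       NULL, 0);
	return PTR_ERR_OR_ZERO(example_pdev);	/* ERR_PTR on failure */
}

static void __exit example_exit(void)
{
	platform_device_unregister(example_pdev);
}
```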

struct platform_device * platform_device_register_data(struct device * parent, const char * name, int id, const void * data, size_t size)

add a platform-level device with platform-specific data

Parameters

struct device * parent
parent device for the device we’re adding
const char * name
base name of the device we’re adding
int id
instance id
const void * data
platform specific data for this platform device
size_t size
size of platform specific data

Description

This function creates a simple platform device that requires minimal resource and memory management. A canned release function, which frees the memory allocated for the device, allows drivers using such devices to be unloaded without waiting for the last reference to the device to be dropped.

Returns struct platform_device pointer on success, or ERR_PTR() on error.

struct resource * platform_get_resource(struct platform_device * dev, unsigned int type, unsigned int num)

get a resource for a device

Parameters

struct platform_device * dev
platform device
unsigned int type
resource type
unsigned int num
resource index
int platform_get_irq(struct platform_device * dev, unsigned int num)

get an IRQ for a device

Parameters

struct platform_device * dev
platform device
unsigned int num
IRQ number index
int platform_irq_count(struct platform_device * dev)

Count the number of IRQs a platform device uses

Parameters

struct platform_device * dev
platform device

Return

Number of IRQs the platform device uses, or -EPROBE_DEFER if an IRQ is not yet available

struct resource * platform_get_resource_byname(struct platform_device * dev, unsigned int type, const char * name)

get a resource for a device by name

Parameters

struct platform_device * dev
platform device
unsigned int type
resource type
const char * name
resource name
int platform_get_irq_byname(struct platform_device * dev, const char * name)

get an IRQ for a device by name

Parameters

struct platform_device * dev
platform device
const char * name
IRQ name
int platform_add_devices(struct platform_device ** devs, int num)

add a number of platform devices

Parameters

struct platform_device ** devs
array of platform devices to add
int num
number of platform devices in array
void platform_device_put(struct platform_device * pdev)

destroy a platform device

Parameters

struct platform_device * pdev
platform device to free

Description

Free all memory associated with a platform device. This function must _only_ be externally called in error cases. All other usage is a bug.

struct platform_device * platform_device_alloc(const char * name, int id)

create a platform device

Parameters

const char * name
base name of the device we’re adding
int id
instance id

Description

Create a platform device object which can have other objects attached to it, and which will have attached objects freed when it is released.

int platform_device_add_resources(struct platform_device * pdev, const struct resource * res, unsigned int num)

add resources to a platform device

Parameters

struct platform_device * pdev
platform device allocated by platform_device_alloc to add resources to
const struct resource * res
set of resources that needs to be allocated for the device
unsigned int num
number of resources

Description

Add a copy of the resources to the platform device. The memory associated with the resources will be freed when the platform device is released.

int platform_device_add_data(struct platform_device * pdev, const void * data, size_t size)

add platform-specific data to a platform device

Parameters

struct platform_device * pdev
platform device allocated by platform_device_alloc to add resources to
const void * data
platform specific data for this platform device
size_t size
size of platform specific data

Description

Add a copy of platform specific data to the platform device’s platform_data pointer. The memory associated with the platform data will be freed when the platform device is released.

int platform_device_add_properties(struct platform_device * pdev, struct property_entry * properties)

add built-in properties to a platform device

Parameters

struct platform_device * pdev
platform device to add properties to
struct property_entry * properties
null terminated array of properties to add

Description

The function will take deep copy of properties and attach the copy to the platform device. The memory associated with properties will be freed when the platform device is released.

int platform_device_add(struct platform_device * pdev)

add a platform device to device hierarchy

Parameters

struct platform_device * pdev
platform device we’re adding

Description

This is part 2 of platform_device_register(), though may be called separately _iff_ pdev was allocated by platform_device_alloc().
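The two-step alloc/add flow can be sketched as follows; all names, resources and data are hypothetical, and platform_device_put() is used only on the error path, before the device has been successfully added:

```c
#include <linux/platform_device.h>
#include <linux/err.h>

static int example_create(const struct resource *res, unsigned int nres,
			  const void *data, size_t size)
{
	struct platform_device *pdev;
	int ret;

	pdev = platform_device_alloc("example", 0);
	if (!pdev)
		return -ENOMEM;

	ret = platform_device_add_resources(pdev, res, nres);
	if (ret)
		goto err_put;

	ret = platform_device_add_data(pdev, data, size);
	if (ret)
		goto err_put;

	ret = platform_device_add(pdev);
	if (ret)
		goto err_put;

	return 0;

err_put:
	platform_device_put(pdev);	/* frees pdev and the copies above */
	return ret;
}
```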

void platform_device_del(struct platform_device * pdev)

remove a platform-level device

Parameters

struct platform_device * pdev
platform device we’re removing

Description

Note that this function will also release all memory- and port-based resources owned by the device (dev->resource). This function must _only_ be externally called in error cases. All other usage is a bug.

int platform_device_register(struct platform_device * pdev)

add a platform-level device

Parameters

struct platform_device * pdev
platform device we’re adding
void platform_device_unregister(struct platform_device * pdev)

unregister a platform-level device

Parameters

struct platform_device * pdev
platform device we’re unregistering

Description

Unregistration is done in 2 steps. First we release all resources and remove the device from the subsystem, then we drop the reference count by calling platform_device_put().

struct platform_device * platform_device_register_full(const struct platform_device_info * pdevinfo)

add a platform-level device with resources and platform-specific data

Parameters

const struct platform_device_info * pdevinfo
data used to create device

Description

Returns struct platform_device pointer on success, or ERR_PTR() on error.

int __platform_driver_register(struct platform_driver * drv, struct module * owner)

register a driver for platform-level devices

Parameters

struct platform_driver * drv
platform driver structure
struct module * owner
owning module/driver
void platform_driver_unregister(struct platform_driver * drv)

unregister a driver for platform-level devices

Parameters

struct platform_driver * drv
platform driver structure
int __platform_driver_probe(struct platform_driver * drv, int (*probe) (struct platform_device *), struct module * module)

register driver for non-hotpluggable device

Parameters

struct platform_driver * drv
platform driver structure
int (*)(struct platform_device *) probe
the driver probe routine, probably from an __init section
struct module * module
module which will be the owner of the driver

Description

Use this instead of platform_driver_register() when you know the device is not hotpluggable and has already been registered, and you want to remove its run-once probe() infrastructure from memory after the driver has bound to the device.

One typical use for this would be with drivers for controllers integrated into system-on-chip processors, where the controller devices have been configured as part of board setup.

Note that this is incompatible with deferred probing.

Returns zero if the driver registered and bound to a device, else returns a negative error code and the driver is not registered.
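A hedged sketch of the intended usage, via the platform_driver_probe() convenience macro, which supplies THIS_MODULE; the "example" driver name is hypothetical:

```c
#include <linux/platform_device.h>

/* Note that probe is passed to platform_driver_probe() rather than
 * set in the structure, so it can live in an __init section. */
static int __init example_probe(struct platform_device *pdev)
{
	return 0;	/* one-shot setup goes here */
}

static struct platform_driver example_driver = {
	.driver = {
		.name = "example",
	},
};

static int __init example_init(void)
{
	return platform_driver_probe(&example_driver, example_probe);
}
```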

struct platform_device * __platform_create_bundle(struct platform_driver * driver, int (*probe) (struct platform_device *), struct resource * res, unsigned int n_res, const void * data, size_t size, struct module * module)

register driver and create corresponding device

Parameters

struct platform_driver * driver
platform driver structure
int (*)(struct platform_device *) probe
the driver probe routine, probably from an __init section
struct resource * res
set of resources that needs to be allocated for the device
unsigned int n_res
number of resources
const void * data
platform specific data for this platform device
size_t size
size of platform specific data
struct module * module
module which will be the owner of the driver

Description

Use this in legacy-style modules that probe hardware directly and register a single platform device and corresponding platform driver.

Returns struct platform_device pointer on success, or ERR_PTR() on error.

int __platform_register_drivers(struct platform_driver *const * drivers, unsigned int count, struct module * owner)

register an array of platform drivers

Parameters

struct platform_driver *const * drivers
an array of drivers to register
unsigned int count
the number of drivers to register
struct module * owner
module owning the drivers

Description

Registers platform drivers specified by an array. On failure to register a driver, all previously registered drivers will be unregistered. Callers of this API should use platform_unregister_drivers() to unregister drivers in the reverse order.

Return

0 on success or a negative error code on failure.
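A sketch of registering and unregistering an array of drivers via the platform_register_drivers()/platform_unregister_drivers() convenience macros, which supply THIS_MODULE as the owner; foo_driver and bar_driver are assumed to be defined elsewhere:

```c
#include <linux/platform_device.h>

static struct platform_driver *const example_drivers[] = {
	&foo_driver,
	&bar_driver,
};

static int __init example_init(void)
{
	/* on failure, already-registered drivers are unregistered */
	return platform_register_drivers(example_drivers,
					 ARRAY_SIZE(example_drivers));
}

static void __exit example_exit(void)
{
	/* unregisters in reverse order of registration */
	platform_unregister_drivers(example_drivers,
				    ARRAY_SIZE(example_drivers));
}
```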

void platform_unregister_drivers(struct platform_driver *const * drivers, unsigned int count)

unregister an array of platform drivers

Parameters

struct platform_driver *const * drivers
an array of drivers to unregister
unsigned int count
the number of drivers to unregister

Description

Unregisters platform drivers specified by an array. This is typically used to complement an earlier call to platform_register_drivers(). Drivers are unregistered in the reverse order in which they were registered.

int bus_for_each_dev(struct bus_type * bus, struct device * start, void * data, int (*fn) (struct device *, void *))

device iterator.

Parameters

struct bus_type * bus
bus type.
struct device * start
device to start iterating from.
void * data
data for the callback.
int (*)(struct device *, void *) fn
function to be called for each device.

Description

Iterate over bus‘s list of devices, and call fn for each, passing it data. If start is not NULL, we use that device to begin iterating from.

We check the return of fn each time. If it returns anything other than 0, we break out and return that value.

NOTE

The device that returns a non-zero value is not retained in any way, nor is its refcount incremented. If the caller needs to retain this data, it should do so, and increment the reference count in the supplied callback.

struct device * bus_find_device(struct bus_type * bus, struct device * start, void * data, int (*match) (struct device *dev, void *data))

device iterator for locating a particular device.

Parameters

struct bus_type * bus
bus type
struct device * start
Device to begin with
void * data
Data to pass to match function
int (*)(struct device *dev, void *data) match
Callback function to check device

Description

This is similar to the bus_for_each_dev() function above, but it returns a reference to a device that is ‘found’ for later use, as determined by the match callback.

The callback should return 0 if the device doesn’t match and non-zero if it does. If the callback returns non-zero, this function will return to the caller and not iterate over any more devices.

struct device * bus_find_device_by_name(struct bus_type * bus, struct device * start, const char * name)

device iterator for locating a particular device of a specific name

Parameters

struct bus_type * bus
bus type
struct device * start
Device to begin with
const char * name
name of the device to match

Description

This is similar to the bus_find_device() function above, but it handles searching by a name automatically; there is no need to write another strcmp matching function.

struct device * subsys_find_device_by_id(struct bus_type * subsys, unsigned int id, struct device * hint)

find a device with a specific enumeration number

Parameters

struct bus_type * subsys
subsystem
unsigned int id
index ‘id’ in struct device
struct device * hint
device to check first

Description

Check the hint’s next object and if it is a match return it directly, otherwise, fall back to a full list search. Either way a reference for the returned object is taken.

int bus_for_each_drv(struct bus_type * bus, struct device_driver * start, void * data, int (*fn) (struct device_driver *, void *))

driver iterator

Parameters

struct bus_type * bus
bus we’re dealing with.
struct device_driver * start
driver to start iterating on.
void * data
data to pass to the callback.
int (*)(struct device_driver *, void *) fn
function to call for each driver.

Description

This is nearly identical to the device iterator above. We iterate over each driver that belongs to bus, and call fn for each. If fn returns anything but 0, we break out and return it. If start is not NULL, we use it as the head of the list.

NOTE

we don’t return the driver that returns a non-zero value, nor do we leave the reference count incremented for that driver. If the caller needs to know that info, it must set it in the callback. It must also be sure to increment the refcount so it doesn’t disappear before returning to the caller.

int bus_rescan_devices(struct bus_type * bus)

rescan devices on the bus for possible drivers

Parameters

struct bus_type * bus
the bus to scan.

Description

This function will look for devices on the bus with no driver attached and rescan them against existing drivers to see if they match any, by calling device_attach() for the unbound devices.

int device_reprobe(struct device * dev)

remove driver for a device and probe for a new driver

Parameters

struct device * dev
the device to reprobe

Description

This function detaches the attached driver (if any) from the given device and restarts the driver probing process. It is intended to be used if probing criteria change during a device’s lifetime and driver attachment should change accordingly.

int bus_register(struct bus_type * bus)

register a driver-core subsystem

Parameters

struct bus_type * bus
bus to register

Description

Once we have that, we register the bus with the kobject infrastructure, then register the children subsystems it has: the devices and drivers that belong to the subsystem.

void bus_unregister(struct bus_type * bus)

remove a bus from the system

Parameters

struct bus_type * bus
bus.

Description

Unregister the child subsystems and the bus itself. Finally, we call bus_put() to release the refcount.

void subsys_dev_iter_init(struct subsys_dev_iter * iter, struct bus_type * subsys, struct device * start, const struct device_type * type)

initialize subsys device iterator

Parameters

struct subsys_dev_iter * iter
subsys iterator to initialize
struct bus_type * subsys
the subsys we want to iterate over
struct device * start
the device to start iterating from, if any
const struct device_type * type
device_type of the devices to iterate over, NULL for all

Description

Initialize subsys iterator iter such that it iterates over devices of subsys. If start is set, the list iteration will start there, otherwise if it is NULL, the iteration starts at the beginning of the list.

struct device * subsys_dev_iter_next(struct subsys_dev_iter * iter)

iterate to the next device

Parameters

struct subsys_dev_iter * iter
subsys iterator to proceed

Description

Proceed iter to the next device and return it. Returns NULL if iteration is complete.

The returned device is referenced and won’t be released until the iterator proceeds to the next device or exits. The caller is free to do whatever it wants with the device, including calling back into subsys code.

void subsys_dev_iter_exit(struct subsys_dev_iter * iter)

finish iteration

Parameters

struct subsys_dev_iter * iter
subsys iterator to finish

Description

Finish an iteration. Always call this function after iteration is complete whether the iteration ran till the end or not.
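Putting the three iterator calls together, a subsystem walk might look like this sketch (the function name is illustrative; NULL start and NULL type mean “all devices, from the beginning of the list”):

```c
#include <linux/device.h>

static void example_walk(struct bus_type *subsys)
{
	struct subsys_dev_iter iter;
	struct device *dev;

	subsys_dev_iter_init(&iter, subsys, NULL, NULL);
	while ((dev = subsys_dev_iter_next(&iter)))
		dev_info(dev, "visited\n");  /* dev is referenced here */
	subsys_dev_iter_exit(&iter);	     /* always pair with _init */
}
```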

int subsys_system_register(struct bus_type * subsys, const struct attribute_group ** groups)

register a subsystem at /sys/devices/system/

Parameters

struct bus_type * subsys
system subsystem
const struct attribute_group ** groups
default attributes for the root device

Description

All ‘system’ subsystems have a /sys/devices/system/<name> root device with the name of the subsystem. The root device can carry subsystem-wide attributes. All registered devices are below this single root device and are named after the subsystem with a simple enumeration number appended. The registered devices are not explicitly named; only ‘id’ in the device needs to be set.

Do not use this interface for anything new; it exists only for compatibility with bad ideas. New subsystems should use plain subsystems, and subsystem-wide attributes should be added to the subsystem directory itself rather than to some fake root device placed in /sys/devices/system/<name>.

int subsys_virtual_register(struct bus_type * subsys, const struct attribute_group ** groups)

register a subsystem at /sys/devices/virtual/

Parameters

struct bus_type * subsys
virtual subsystem
const struct attribute_group ** groups
default attributes for the root device

Description

All ‘virtual’ subsystems have a /sys/devices/virtual/<name> root device with the name of the subsystem. The root device can carry subsystem-wide attributes. All registered devices are below this single root device. There’s no restriction on device naming. This is for kernel software constructs which need a sysfs interface.

Device Drivers DMA Management

struct dma_buf * dma_buf_export(const struct dma_buf_export_info * exp_info)

Creates a new dma_buf, and associates an anon file with this buffer, so it can be exported. Also connect the allocator specific data and ops to the buffer. Additionally, provide a name string for exporter; useful in debugging.

Parameters

const struct dma_buf_export_info * exp_info
[in] holds all the export related information provided by the exporter. see struct dma_buf_export_info for further details.

Description

Returns, on success, a newly created dma_buf object, which wraps the supplied private data and operations for dma_buf_ops. If the required ops are missing, or allocating the struct dma_buf fails, a negative error is returned.

int dma_buf_fd(struct dma_buf * dmabuf, int flags)

returns a file descriptor for the given dma_buf

Parameters

struct dma_buf * dmabuf
[in] pointer to dma_buf for which fd is required.
int flags
[in] flags to give to fd

Description

On success, returns an associated ‘fd’. Else, returns error.

struct dma_buf * dma_buf_get(int fd)

returns the dma_buf structure related to an fd

Parameters

int fd
[in] fd associated with the dma_buf to be returned

Description

On success, returns the dma_buf structure associated with an fd; uses the file’s refcounting, done by fget(), to increase the refcount. Returns ERR_PTR otherwise.

void dma_buf_put(struct dma_buf * dmabuf)

decreases refcount of the buffer

Parameters

struct dma_buf * dmabuf
[in] buffer to reduce refcount of

Description

Uses file’s refcounting done implicitly by fput()

struct dma_buf_attachment * dma_buf_attach(struct dma_buf * dmabuf, struct device * dev)

Add the device to dma_buf’s attachments list; optionally, calls attach() of dma_buf_ops to allow device-specific attach functionality

Parameters

struct dma_buf * dmabuf
[in] buffer to attach device to.
struct device * dev
[in] device to be attached.

Description

Returns struct dma_buf_attachment * for this attachment; returns ERR_PTR on error.

void dma_buf_detach(struct dma_buf * dmabuf, struct dma_buf_attachment * attach)

Remove the given attachment from dmabuf’s attachments list; optionally calls detach() of dma_buf_ops for device-specific detach

Parameters

struct dma_buf * dmabuf
[in] buffer to detach from.
struct dma_buf_attachment * attach
[in] attachment to be detached; is free’d after this call.
struct sg_table * dma_buf_map_attachment(struct dma_buf_attachment * attach, enum dma_data_direction direction)

Returns the scatterlist table of the attachment; mapped into _device_ address space. Is a wrapper for map_dma_buf() of the dma_buf_ops.

Parameters

struct dma_buf_attachment * attach
[in] attachment whose scatterlist is to be returned
enum dma_data_direction direction
[in] direction of DMA transfer

Description

Returns sg_table containing the scatterlist to be returned; returns ERR_PTR on error.
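An importer-side sketch combining dma_buf_get(), dma_buf_attach() and dma_buf_map_attachment(), with teardown in reverse order; all names are illustrative, and fd is assumed to come from userspace while dev is the importing device:

```c
#include <linux/dma-buf.h>
#include <linux/err.h>

static int example_import(int fd, struct device *dev)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	int ret;

	dmabuf = dma_buf_get(fd);		/* takes a file reference */
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach)) {
		ret = PTR_ERR(attach);
		goto err_put;
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
		goto err_detach;
	}

	/* ... program the device with the returned scatterlist ... */

	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	ret = 0;

err_detach:
	dma_buf_detach(dmabuf, attach);
err_put:
	dma_buf_put(dmabuf);
	return ret;
}
```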

void dma_buf_unmap_attachment(struct dma_buf_attachment * attach, struct sg_table * sg_table, enum dma_data_direction direction)

unmaps and decreases usecount of the buffer; might deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of dma_buf_ops.

Parameters

struct dma_buf_attachment * attach
[in] attachment to unmap buffer from
struct sg_table * sg_table
[in] scatterlist info of the buffer to unmap
enum dma_data_direction direction
[in] direction of DMA transfer
int dma_buf_begin_cpu_access(struct dma_buf * dmabuf, enum dma_data_direction direction)

Must be called before accessing a dma_buf from the cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific preparations. Coherency is only guaranteed in the specified range for the specified access direction.

Parameters

struct dma_buf * dmabuf
[in] buffer to prepare cpu access for.
enum dma_data_direction direction
[in] direction of CPU access.

Description

Can return negative error values, returns 0 on success.

int dma_buf_end_cpu_access(struct dma_buf * dmabuf, enum dma_data_direction direction)

Must be called after accessing a dma_buf from the cpu in the kernel context. Calls end_cpu_access to allow exporter-specific actions. Coherency is only guaranteed in the specified range for the specified access direction.

Parameters

struct dma_buf * dmabuf
[in] buffer to complete cpu access for.
enum dma_data_direction direction
[in] direction of CPU access.

Description

Can return negative error values, returns 0 on success.
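Kernel CPU access must be bracketed by these two calls; a hedged sketch, here reading a byte from the first page via dma_buf_kmap() (the function name and out parameter are illustrative):

```c
#include <linux/dma-buf.h>

static int example_cpu_read(struct dma_buf *dmabuf, u8 *out)
{
	void *vaddr;
	int ret;

	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	vaddr = dma_buf_kmap(dmabuf, 0);	/* page 0 of the buffer */
	*out = *(u8 *)vaddr;			/* illustrative read */
	dma_buf_kunmap(dmabuf, 0, vaddr);

	return dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
}
```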

void * dma_buf_kmap_atomic(struct dma_buf * dmabuf, unsigned long page_num)

Map a page of the buffer object into kernel address space. The same restrictions as for kmap_atomic and friends apply.

Parameters

struct dma_buf * dmabuf
[in] buffer to map page from.
unsigned long page_num
[in] page in PAGE_SIZE units to map.

Description

This call must always succeed, any necessary preparations that might fail need to be done in begin_cpu_access.

void dma_buf_kunmap_atomic(struct dma_buf * dmabuf, unsigned long page_num, void * vaddr)

Unmap a page obtained by dma_buf_kmap_atomic.

Parameters

struct dma_buf * dmabuf
[in] buffer to unmap page from.
unsigned long page_num
[in] page in PAGE_SIZE units to unmap.
void * vaddr
[in] kernel space pointer obtained from dma_buf_kmap_atomic.

Description

This call must always succeed.

void * dma_buf_kmap(struct dma_buf * dmabuf, unsigned long page_num)

Map a page of the buffer object into kernel address space. The same restrictions as for kmap and friends apply.

Parameters

struct dma_buf * dmabuf
[in] buffer to map page from.
unsigned long page_num
[in] page in PAGE_SIZE units to map.

Description

This call must always succeed, any necessary preparations that might fail need to be done in begin_cpu_access.

void dma_buf_kunmap(struct dma_buf * dmabuf, unsigned long page_num, void * vaddr)

Unmap a page obtained by dma_buf_kmap.

Parameters

struct dma_buf * dmabuf
[in] buffer to unmap page from.
unsigned long page_num
[in] page in PAGE_SIZE units to unmap.
void * vaddr
[in] kernel space pointer obtained from dma_buf_kmap.

Description

This call must always succeed.

int dma_buf_mmap(struct dma_buf * dmabuf, struct vm_area_struct * vma, unsigned long pgoff)

Set up a userspace mmap with the given vma

Parameters

struct dma_buf * dmabuf
[in] buffer that should back the vma
struct vm_area_struct * vma
[in] vma for the mmap
unsigned long pgoff
[in] offset in pages where this mmap should start within the dma-buf buffer.

Description

This function adjusts the passed-in vma so that it points at the file of the dma_buf operation. It also adjusts the starting pgoff and does bounds checking on the size of the vma. Then it calls the exporter’s mmap function to set up the mapping.

Can return negative error values, returns 0 on success.

void * dma_buf_vmap(struct dma_buf * dmabuf)

Create virtual mapping for the buffer object into kernel address space. Same restrictions as for vmap and friends apply.

Parameters

struct dma_buf * dmabuf
[in] buffer to vmap

Description

This call may fail due to lack of virtual mapping address space. These calls are optional in drivers. They are intended for mapping objects linearly into kernel space for heavily used objects. Please attempt to use kmap/kunmap before thinking about these interfaces.

Returns NULL on error.

void dma_buf_vunmap(struct dma_buf * dmabuf, void * vaddr)

Unmap a vmap obtained by dma_buf_vmap.

Parameters

struct dma_buf * dmabuf
[in] buffer to vunmap
void * vaddr
[in] vmap to vunmap
unsigned fence_context_alloc(unsigned num)

allocate an array of fence contexts

Parameters

unsigned num
[in] number of contexts to allocate

Description

This function will return the first index of the allocated range of fence contexts. The fence context is used for setting fence->context to a unique number.

int fence_signal_locked(struct fence * fence)

signal completion of a fence

Parameters

struct fence * fence
the fence to signal

Description

Signal completion for software callbacks on a fence, this will unblock fence_wait() calls and run all the callbacks added with fence_add_callback(). Can be called multiple times, but since a fence can only go from unsignaled to signaled state, it will only be effective the first time.

Unlike fence_signal, this function must be called with fence->lock held.

int fence_signal(struct fence * fence)

signal completion of a fence

Parameters

struct fence * fence
the fence to signal

Description

Signal completion for software callbacks on a fence, this will unblock fence_wait() calls and run all the callbacks added with fence_add_callback(). Can be called multiple times, but since a fence can only go from unsignaled to signaled state, it will only be effective the first time.

signed long fence_wait_timeout(struct fence * fence, bool intr, signed long timeout)

sleep until the fence gets signaled or until timeout elapses

Parameters

struct fence * fence
[in] the fence to wait on
bool intr
[in] if true, do an interruptible wait
signed long timeout
[in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT

Description

Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the remaining timeout in jiffies on success. Other error values may be returned on custom implementations.

Performs a synchronous wait on this fence. It is assumed the caller directly or indirectly (buf-mgr between reservation and committing) holds a reference to the fence, otherwise the fence might be freed before return, resulting in undefined behavior.

void fence_enable_sw_signaling(struct fence * fence)

enable signaling on fence

Parameters

struct fence * fence
[in] the fence to enable

Description

This will request software signaling to be enabled, to make the fence complete as soon as possible.

int fence_add_callback(struct fence * fence, struct fence_cb * cb, fence_func_t func)

add a callback to be called when the fence is signaled

Parameters

struct fence * fence
[in] the fence to wait on
struct fence_cb * cb
[in] the callback to register
fence_func_t func
[in] the function to call

Description

cb will be initialized by fence_add_callback, no initialization by the caller is required. Any number of callbacks can be registered to a fence, but a callback can only be registered to one fence at a time.

Note that the callback can be called from an atomic context. If fence is already signaled, this function will return -ENOENT (and not call the callback)

Add a software callback to the fence. Same restrictions apply to refcount as it does to fence_wait, however the caller doesn’t need to keep a refcount to fence afterwards: when software access is enabled, the creator of the fence is required to keep the fence alive until after it signals with fence_signal. The callback itself can be called from irq context.
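A sketch of bridging a fence signal to a completion; the callback runs in atomic (possibly irq) context, so only irq-safe operations are allowed in it, and all names are illustrative:

```c
#include <linux/fence.h>
#include <linux/completion.h>
#include <linux/kernel.h>

struct example_waiter {
	struct fence_cb cb;
	struct completion done;
};

/* Called in atomic context when the fence signals. */
static void example_cb(struct fence *fence, struct fence_cb *cb)
{
	struct example_waiter *w =
		container_of(cb, struct example_waiter, cb);

	complete(&w->done);
}

/* Illustrative blocking wait built on the callback interface. */
static void example_wait(struct fence *fence)
{
	struct example_waiter w;

	init_completion(&w.done);
	if (fence_add_callback(fence, &w.cb, example_cb))
		return;		/* -ENOENT: fence already signaled */
	wait_for_completion(&w.done);
}
```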

bool fence_remove_callback(struct fence * fence, struct fence_cb * cb)

remove a callback from the signaling list

Parameters

struct fence * fence
[in] the fence to wait on
struct fence_cb * cb
[in] the callback to remove

Description

Remove a previously queued callback from the fence. This function returns true if the callback is successfully removed, or false if the fence has already been signaled.

WARNING: Cancelling a callback should only be done if you really know what you’re doing, since deadlocks and race conditions could occur all too easily. For this reason, it should only ever be done on hardware lockup recovery, with a reference held to the fence.

signed long fence_default_wait(struct fence * fence, bool intr, signed long timeout)

default sleep until the fence gets signaled or until timeout elapses

Parameters

struct fence * fence
[in] the fence to wait on
bool intr
[in] if true, do an interruptible wait
signed long timeout
[in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT

Description

Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the remaining timeout in jiffies on success.

signed long fence_wait_any_timeout(struct fence ** fences, uint32_t count, bool intr, signed long timeout)

sleep until any fence gets signaled or until timeout elapses

Parameters

struct fence ** fences
[in] array of fences to wait on
uint32_t count
[in] number of fences to wait on
bool intr
[in] if true, do an interruptible wait
signed long timeout
[in] timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT

Description

Returns -EINVAL on custom fence wait implementation, -ERESTARTSYS if interrupted, 0 if the wait timed out, or the remaining timeout in jiffies on success.

Synchronously waits for the first fence in the array to be signaled. The caller needs to hold a reference to all fences in the array, otherwise a fence might be freed before return, resulting in undefined behavior.
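A minimal usage sketch, assuming the caller already holds references to both fences (fence_a and fence_b are illustrative names):

```c
	/* Sketch: wait up to 100ms for either of two fences to signal. */
	struct fence *fences[2] = { fence_a, fence_b };
	signed long ret;

	ret = fence_wait_any_timeout(fences, 2, true, msecs_to_jiffies(100));
	if (ret == 0)
		pr_warn("timed out before any fence signaled\n");
	else if (ret < 0)
		pr_warn("wait failed: %ld\n", ret);	/* -ERESTARTSYS or -EINVAL */
	/* else: at least one fence signaled; ret is the remaining jiffies */
```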

void fence_init(struct fence * fence, const struct fence_ops * ops, spinlock_t * lock, unsigned context, unsigned seqno)

Initialize a custom fence.

Parameters

struct fence * fence
[in] the fence to initialize
const struct fence_ops * ops
[in] the fence_ops for operations on this fence
spinlock_t * lock
[in] the irqsafe spinlock to use for locking this fence
unsigned context
[in] the execution context this fence is run on
unsigned seqno
[in] a linear increasing sequence number for this context

Description

Initializes an allocated fence. The caller doesn't have to keep its refcount after committing with this fence, but it will need to hold a refcount again if fence_ops.enable_signaling gets called. This can be used as a base for implementing other types of fence.

context and seqno are used for easy comparison between fences, making it possible to check which fence is later by simply using fence_later.
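A sketch of how a driver might emit fences on a ring, assuming a driver-private ops table my_fence_ops and a context obtained from fence_context_alloc() (struct my_ring and my_ring_emit_fence are hypothetical):

```c
/* Sketch: my_ring, my_ring_emit_fence and my_fence_ops are hypothetical. */
struct my_ring {
	spinlock_t fence_lock;		/* irqsafe, shared by this ring's fences */
	unsigned fence_context;		/* from fence_context_alloc(1) */
	unsigned seqno;			/* monotonically increasing per ring */
};

static struct fence *my_ring_emit_fence(struct my_ring *ring)
{
	struct fence *fence = kzalloc(sizeof(*fence), GFP_KERNEL);

	if (!fence)
		return NULL;
	fence_init(fence, &my_fence_ops, &ring->fence_lock,
		   ring->fence_context, ++ring->seqno);
	return fence;
}
```

Using one context per ring keeps seqnos linear within that context, which is what fence_is_later and fence_later rely on.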

struct fence

software synchronization primitive

Definition

struct fence {
  struct kref refcount;
  const struct fence_ops * ops;
  struct rcu_head rcu;
  struct list_head cb_list;
  spinlock_t * lock;
  unsigned context;
  unsigned seqno;
  unsigned long flags;
  ktime_t timestamp;
  int status;
};

Members

struct kref refcount
refcount for this fence
const struct fence_ops * ops
fence_ops associated with this fence
struct rcu_head rcu
used for releasing fence with kfree_rcu
struct list_head cb_list
list of all callbacks to call
spinlock_t * lock
spin_lock_irqsave used for locking
unsigned context
execution context this fence belongs to, returned by fence_context_alloc()
unsigned seqno
the sequence number of this fence inside the execution context, can be compared to decide which fence would be signaled later.
unsigned long flags
A mask of FENCE_FLAG_* defined below
ktime_t timestamp
Timestamp when the fence was signaled.
int status
Optional, only valid if < 0, must be set before calling fence_signal, indicates that the fence has completed with an error.

Description

the flags member must be manipulated and read using the appropriate atomic ops (bit_*), so taking the spinlock will not be needed most of the time.

FENCE_FLAG_SIGNALED_BIT - fence is already signaled
FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the implementer of the fence for its own purposes. Can be used in different ways by different fence implementers, so do not rely on this.

*) Since atomic bitops are used, this is not guaranteed to be the case. Particularly, if the bit was set, but fence_signal was called right before this bit was set, it would have been able to set the FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called. Adding a check for FENCE_FLAG_SIGNALED_BIT after setting FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that after fence_signal was called, any enable_signaling call will have either been completed, or never called at all.

struct fence_cb

callback for fence_add_callback

Definition

struct fence_cb {
  struct list_head node;
  fence_func_t func;
};

Members

struct list_head node
used by fence_add_callback to append this struct to fence::cb_list
fence_func_t func
fence_func_t to call

Description

This struct will be initialized by fence_add_callback, additional data can be passed along by embedding fence_cb in another struct.

struct fence_ops

operations implemented for fence

Definition

struct fence_ops {
  const char * (* get_driver_name) (struct fence *fence);
  const char * (* get_timeline_name) (struct fence *fence);
  bool (* enable_signaling) (struct fence *fence);
  bool (* signaled) (struct fence *fence);
  signed long (* wait) (struct fence *fence, bool intr, signed long timeout);
  void (* release) (struct fence *fence);
  int (* fill_driver_data) (struct fence *fence, void *data, int size);
  void (* fence_value_str) (struct fence *fence, char *str, int size);
  void (* timeline_value_str) (struct fence *fence, char *str, int size);
};

Members

const char * (*)(struct fence *fence) get_driver_name
returns the driver name.
const char * (*)(struct fence *fence) get_timeline_name
return the name of the context this fence belongs to.
bool (*)(struct fence *fence) enable_signaling
enable software signaling of fence.
bool (*)(struct fence *fence) signaled
[optional] peek whether the fence is signaled, can be null.
signed long (*)(struct fence *fence, bool intr, signed long timeout) wait
custom wait implementation, or fence_default_wait.
void (*)(struct fence *fence) release
[optional] called on destruction of fence, can be null
int (*)(struct fence *fence, void *data, int size) fill_driver_data
[optional] callback to fill in free-form debug info Returns amount of bytes filled, or -errno.
void (*)(struct fence *fence, char *str, int size) fence_value_str
[optional] fills in the value of the fence as a string
void (*)(struct fence *fence, char *str, int size) timeline_value_str
[optional] fills in the current value of the timeline as a string

Description

Notes on enable_signaling: For fence implementations that have the capability for hw->hw signaling, they can implement this op to enable the necessary irqs, or insert commands into the cmdstream, etc. This is called in the first wait() or add_callback() path to let the fence implementation know that there is another driver waiting on the signal (i.e. the hw->sw case).

This function can be called from atomic context, but not from irq context, so normal spinlocks can be used.

A return value of false indicates the fence already passed, or some failure occurred that made it impossible to enable signaling. True indicates successful enabling.

fence->status may be set in enable_signaling, but only when false is returned.

Calling fence_signal before enable_signaling is called allows for a tiny race window in which enable_signaling is called during, before, or after fence_signal. To fight this, it is recommended that before enable_signaling returns true an extra reference is taken on the fence, to be released when the fence is signaled. This will mean fence_signal will still be called twice, but the second time will be a noop since it was already signaled.

Notes on signaled: May set fence->status if returning true.

Notes on wait: Must not be NULL; set to fence_default_wait for the default implementation. The fence_default_wait implementation should work for any fence, as long as enable_signaling works correctly.

Must return -ERESTARTSYS if intr is true and the wait was interrupted, the remaining jiffies if the fence has signaled, or 0 if the wait timed out. Custom implementations can also return other error values, which should be treated as if the fence is signaled; for example, a hardware lockup could be reported that way.

Notes on release: Can be NULL, this function allows additional commands to run on destruction of the fence. Can be called from irq context. If pointer is set to NULL, kfree will get called instead.
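The notes above can be tied together in a minimal ops table sketch. The my_* callbacks below are hypothetical; only fence_default_wait is a real helper named in this document:

```c
/* Minimal sketch of a fence_ops table; the my_* callbacks are hypothetical. */
static const char *my_get_driver_name(struct fence *fence)
{
	return "my_driver";
}

static const char *my_get_timeline_name(struct fence *fence)
{
	return "my_ring0";
}

static bool my_enable_signaling(struct fence *fence)
{
	/* e.g. enable the completion irq here; return false if the
	 * fence already passed or signaling cannot be enabled. */
	return true;
}

static const struct fence_ops my_fence_ops = {
	.get_driver_name	= my_get_driver_name,
	.get_timeline_name	= my_get_timeline_name,
	.enable_signaling	= my_enable_signaling,
	.wait			= fence_default_wait,
	/* .release left NULL: the core frees the fence with kfree */
};
```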

struct fence * fence_get(struct fence * fence)

increases refcount of the fence

Parameters

struct fence * fence
[in] fence to increase refcount of

Description

Returns the same fence, with refcount increased by 1.

struct fence * fence_get_rcu(struct fence * fence)

get a fence from a reservation_object_list with rcu read lock

Parameters

struct fence * fence
[in] fence to increase refcount of

Description

Returns the fence, or NULL if no refcount could be obtained.

void fence_put(struct fence * fence)

decreases refcount of the fence

Parameters

struct fence * fence
[in] fence to reduce refcount of

bool fence_is_signaled_locked(struct fence * fence)

Return an indication if the fence is signaled yet.

Parameters

struct fence * fence
[in] the fence to check

Description

Returns true if the fence was already signaled, false if not. Since this function doesn’t enable signaling, it is not guaranteed to ever return true if fence_add_callback, fence_wait or fence_enable_sw_signaling haven’t been called before.

This function requires fence->lock to be held.

bool fence_is_signaled(struct fence * fence)

Return an indication if the fence is signaled yet.

Parameters

struct fence * fence
[in] the fence to check

Description

Returns true if the fence was already signaled, false if not. Since this function doesn’t enable signaling, it is not guaranteed to ever return true if fence_add_callback, fence_wait or fence_enable_sw_signaling haven’t been called before.

It’s recommended for seqno fences to call fence_signal when the operation is complete, it makes it possible to prevent issues from wraparound between time of issue and time of use by checking the return value of this function before calling hardware-specific wait instructions.

bool fence_is_later(struct fence * f1, struct fence * f2)

return whether f1 is chronologically later than f2

Parameters

struct fence * f1
[in] the first fence from the same context
struct fence * f2
[in] the second fence from the same context

Description

Returns true if f1 is chronologically later than f2. Both fences must be from the same context, since a seqno is not re-used across contexts.

struct fence * fence_later(struct fence * f1, struct fence * f2)

return the chronologically later fence

Parameters

struct fence * f1
[in] the first fence from the same context
struct fence * f2
[in] the second fence from the same context

Description

Returns NULL if both fences are signaled, otherwise the fence that would be signaled last. Both fences must be from the same context, since a seqno is not re-used across contexts.

signed long fence_wait(struct fence * fence, bool intr)

sleep until the fence gets signaled

Parameters

struct fence * fence
[in] the fence to wait on
bool intr
[in] if true, do an interruptible wait

Description

This function will return -ERESTARTSYS if interrupted by a signal, or 0 if the fence was signaled. Other error values may be returned on custom implementations.

Performs a synchronous wait on this fence. It is assumed the caller directly or indirectly holds a reference to the fence, otherwise the fence might be freed before return, resulting in undefined behavior.
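The reference requirement above is usually met by bracketing the wait with fence_get/fence_put, as in this sketch:

```c
	/* Hold a reference across the wait so the fence cannot be freed. */
	fence = fence_get(fence);
	ret = fence_wait(fence, true);		/* interruptible wait */
	if (ret == -ERESTARTSYS)
		pr_debug("wait interrupted by a signal\n");
	fence_put(fence);
```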

struct seqno_fence * to_seqno_fence(struct fence * fence)

cast a fence to a seqno_fence

Parameters

struct fence * fence
fence to cast to a seqno_fence

Description

Returns NULL if the fence is not a seqno_fence, or the seqno_fence otherwise.

void seqno_fence_init(struct seqno_fence * fence, spinlock_t * lock, struct dma_buf * sync_buf, uint32_t context, uint32_t seqno_ofs, uint32_t seqno, enum seqno_fence_condition cond, const struct fence_ops * ops)

initialize a seqno fence

Parameters

struct seqno_fence * fence
seqno_fence to initialize
spinlock_t * lock
pointer to spinlock to use for fence
struct dma_buf * sync_buf
buffer containing the memory location to signal on
uint32_t context
the execution context this fence is a part of
uint32_t seqno_ofs
the offset within sync_buf
uint32_t seqno
the sequence # to signal on
enum seqno_fence_condition cond
fence wait condition
const struct fence_ops * ops
the fence_ops for operations on this seqno fence

Description

This function initializes a struct seqno_fence with passed parameters, and takes a reference on sync_buf which is released on fence destruction.

A seqno_fence is a dma_fence which can complete in software when enable_signaling is called, but it also completes when (s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true

The seqno_fence will take a refcount on the sync_buf until it's destroyed, but the actual lifetime of sync_buf may be longer if one of the callers takes a reference to it.

Certain hardware has instructions to insert this type of wait condition in the command stream, so no intervention from software is needed. This type of fence can be destroyed before it completes, since a reference on the sync_buf dma-buf is held. It is encouraged to re-use the same dma-buf for sync_buf, since mapping or unmapping the sync_buf to the device's vm can be expensive.

It is recommended for creators of seqno_fence to call fence_signal before destruction. This will prevent possible issues from wraparound at time of issue vs time of check, since users can check fence_is_signaled before submitting instructions for the hardware to wait on the fence. However, when ops.enable_signaling is not called, it doesn’t have to be done as soon as possible, just before there’s any real danger of seqno wraparound.

struct sync_file * sync_file_create(struct fence * fence)

creates a sync file

Parameters

struct fence * fence
fence to add to the sync_fence

Description

Creates a sync_file containing fence. Once this is called, the sync_file takes ownership of fence. The sync_file can be released with fput(sync_file->file). Returns the sync_file or NULL in case of error.
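A sketch of the common pattern for handing a fence to userspace as a file descriptor; the fd allocation helpers shown here are generic kernel fd APIs, not part of the sync_file interface itself:

```c
	/* Sketch: export a fence to userspace as a file descriptor. */
	struct sync_file *sync_file;
	int fd = get_unused_fd_flags(O_CLOEXEC);

	if (fd < 0)
		return fd;
	sync_file = sync_file_create(fence);
	if (!sync_file) {
		put_unused_fd(fd);
		return -ENOMEM;
	}
	fd_install(fd, sync_file->file);
	/* userspace can now poll() the fd or pass it to another driver */
```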

struct sync_file

sync file to export to the userspace

Definition

struct sync_file {
  struct file * file;
  struct kref kref;
  char name[32];
#ifdef CONFIG_DEBUG_FS
  struct list_head sync_file_list;
#endif
  int num_fences;
  wait_queue_head_t wq;
  atomic_t status;
  struct sync_file_cb cbs[];
};

Members

struct file * file
file representing this fence
struct kref kref
reference count on fence.
char name[32]
name of sync_file. Useful for debugging
struct list_head sync_file_list
membership in global file list
int num_fences
number of sync_pts in the fence
wait_queue_head_t wq
wait queue for fence signaling
atomic_t status
0: signaled, >0:active, <0: error
struct sync_file_cb cbs[]
sync_pts callback information
int dma_alloc_from_coherent(struct device * dev, ssize_t size, dma_addr_t * dma_handle, void ** ret)

try to allocate memory from the per-device coherent area

Parameters

struct device * dev
device from which we allocate memory
ssize_t size
size of requested memory area
dma_addr_t * dma_handle
This will be filled with the correct dma handle
void ** ret
This pointer will be filled with the virtual address of the allocated area.

Description

This function should only be called from per-arch dma_alloc_coherent() to support allocation from per-device coherent memory pools.

Returns 0 if dma_alloc_coherent should continue with allocating from generic memory areas, or !0 if dma_alloc_coherent should return ret.

int dma_release_from_coherent(struct device * dev, int order, void * vaddr)

try to free the memory allocated from per-device coherent memory pool

Parameters

struct device * dev
device from which the memory was allocated
int order
the order of pages allocated
void * vaddr
virtual address of allocated pages

Description

This checks whether the memory was allocated from the per-device coherent memory pool and if so, releases that memory.

Returns 1 if we correctly released the memory, or 0 if dma_release_coherent() should proceed with releasing memory from generic pools.

int dma_mmap_from_coherent(struct device * dev, struct vm_area_struct * vma, void * vaddr, size_t size, int * ret)

try to mmap the memory allocated from per-device coherent memory pool to userspace

Parameters

struct device * dev
device from which the memory was allocated
struct vm_area_struct * vma
vm_area for the userspace memory
void * vaddr
cpu address returned by dma_alloc_from_coherent
size_t size
size of the memory buffer allocated by dma_alloc_from_coherent
int * ret
result from remap_pfn_range()

Description

This checks whether the memory was allocated from the per-device coherent memory pool and if so, maps that memory to the provided vma.

Returns 1 if we correctly mapped the memory, or 0 if the caller should proceed with mapping memory from generic pools.

void * dmam_alloc_coherent(struct device * dev, size_t size, dma_addr_t * dma_handle, gfp_t gfp)

Managed dma_alloc_coherent()

Parameters

struct device * dev
Device to allocate coherent memory for
size_t size
Size of allocation
dma_addr_t * dma_handle
Out argument for allocated DMA handle
gfp_t gfp
Allocation flags

Description

Managed dma_alloc_coherent(). Memory allocated using this function will be automatically released on driver detach.

Return

Pointer to allocated memory on success, NULL on failure.
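Because the allocation is managed, a probe path needs no matching free. A hedged sketch (my_probe and the PAGE_SIZE ring are illustrative, not from this document):

```c
/* Sketch of a probe path using the managed allocator; my_probe is
 * hypothetical. */
static int my_probe(struct platform_device *pdev)
{
	dma_addr_t ring_dma;
	void *ring;

	ring = dmam_alloc_coherent(&pdev->dev, PAGE_SIZE, &ring_dma,
				   GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* program ring_dma into the device ... */
	/* no explicit free needed: released automatically on driver detach */
	return 0;
}
```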

void dmam_free_coherent(struct device * dev, size_t size, void * vaddr, dma_addr_t dma_handle)

Managed dma_free_coherent()

Parameters

struct device * dev
Device to free coherent memory for
size_t size
Size of allocation
void * vaddr
Virtual address of the memory to free
dma_addr_t dma_handle
DMA handle of the memory to free

Description

Managed dma_free_coherent().

void * dmam_alloc_noncoherent(struct device * dev, size_t size, dma_addr_t * dma_handle, gfp_t gfp)

Managed dma_alloc_non_coherent()

Parameters

struct device * dev
Device to allocate non_coherent memory for
size_t size
Size of allocation
dma_addr_t * dma_handle
Out argument for allocated DMA handle
gfp_t gfp
Allocation flags

Description

Managed dma_alloc_non_coherent(). Memory allocated using this function will be automatically released on driver detach.

Return

Pointer to allocated memory on success, NULL on failure.

void dmam_free_noncoherent(struct device * dev, size_t size, void * vaddr, dma_addr_t dma_handle)

Managed dma_free_noncoherent()

Parameters

struct device * dev
Device to free noncoherent memory for
size_t size
Size of allocation
void * vaddr
Virtual address of the memory to free
dma_addr_t dma_handle
DMA handle of the memory to free

Description

Managed dma_free_noncoherent().

int dmam_declare_coherent_memory(struct device * dev, phys_addr_t phys_addr, dma_addr_t device_addr, size_t size, int flags)

Managed dma_declare_coherent_memory()

Parameters

struct device * dev
Device to declare coherent memory for
phys_addr_t phys_addr
Physical address of coherent memory to be declared
dma_addr_t device_addr
Device address of coherent memory to be declared
size_t size
Size of coherent memory to be declared
int flags
Flags

Description

Managed dma_declare_coherent_memory().

Return

0 on success, -errno on failure.

void dmam_release_declared_memory(struct device * dev)

Managed dma_release_declared_memory().

Parameters

struct device * dev
Device to release declared coherent memory for

Description

Managed dma_release_declared_memory().

Device Drivers Power Management

void dpm_resume_start(pm_message_t state)

Execute “noirq” and “early” device callbacks.

Parameters

pm_message_t state
PM transition of the system being carried out.

void dpm_resume_end(pm_message_t state)

Execute “resume” callbacks and complete system transition.

Parameters

pm_message_t state
PM transition of the system being carried out.

Description

Execute “resume” callbacks for all devices and complete the PM transition of the system.

int dpm_suspend_end(pm_message_t state)

Execute “late” and “noirq” device suspend callbacks.

Parameters

pm_message_t state
PM transition of the system being carried out.

int dpm_suspend_start(pm_message_t state)

Prepare devices for PM transition and suspend them.

Parameters

pm_message_t state
PM transition of the system being carried out.

Description

Prepare all non-sysdev devices for system PM transition and execute “suspend” callbacks for them.

int device_pm_wait_for_dev(struct device * subordinate, struct device * dev)

Wait for suspend/resume of a device to complete.

Parameters

struct device * subordinate
Device that needs to wait for dev.
struct device * dev
Device to wait for.

void dpm_for_each_dev(void * data, void (*fn) (struct device *, void *))

device iterator.

Parameters

void * data
data for the callback.
void (*)(struct device *, void *) fn
function to be called for each device.

Description

Iterate over devices in dpm_list, and call fn for each device, passing it data.
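A small sketch of the iterator in use (count_dev is a hypothetical callback; note fn receives each device plus the opaque data pointer):

```c
/* Sketch: count the devices in dpm_list via the iterator callback. */
static void count_dev(struct device *dev, void *data)
{
	(*(int *)data)++;
}

	int count = 0;

	dpm_for_each_dev(&count, count_dev);
	pr_info("%d devices in dpm_list\n", count);
```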

Device Drivers ACPI Support

int acpi_bus_scan(acpi_handle handle)

Add ACPI device node objects in a given namespace scope.

Parameters

acpi_handle handle
Root of the namespace scope to scan.

Description

Scan a given ACPI tree (probably recently hot-plugged) and create and add found devices.

If no devices were found, -ENODEV is returned, but it does not mean that there has been a real error. There just have been no suitable ACPI objects in the table trunk from which the kernel could create a device and add an appropriate driver.

Must be called under acpi_scan_lock.

void acpi_bus_trim(struct acpi_device * adev)

Detach scan handlers and drivers from ACPI device objects.

Parameters

struct acpi_device * adev
Root of the ACPI namespace scope to walk.

Description

Must be called under acpi_scan_lock.

void acpi_scan_drop_device(acpi_handle handle, void * context)

Drop an ACPI device object.

Parameters

acpi_handle handle
Handle of an ACPI namespace node, not used.
void * context
Address of the ACPI device object to drop.

Description

This is invoked by acpi_ns_delete_node() during the removal of the ACPI namespace node the device object pointed to by context is attached to.

The unregistration is carried out asynchronously to avoid running acpi_device_del() under the ACPICA’s namespace mutex and the list is used to ensure the correct ordering (the device objects must be unregistered in the same order in which the corresponding namespace nodes are deleted).

bool acpi_dma_supported(struct acpi_device * adev)

Check DMA support for the specified device.

Parameters

struct acpi_device * adev
The pointer to acpi device

Description

Return false if DMA is not supported. Otherwise, return true

enum dev_dma_attr acpi_get_dma_attr(struct acpi_device * adev)

Check the supported DMA attr for the specified device.

Parameters

struct acpi_device * adev
The pointer to acpi device

Description

Return enum dev_dma_attr.

Device drivers PnP support

int pnp_register_protocol(struct pnp_protocol * protocol)

adds a pnp protocol to the pnp layer

Parameters

struct pnp_protocol * protocol
pointer to the corresponding pnp_protocol structure

Description

Example protocols: ISAPNP, PNPBIOS, etc.

void pnp_unregister_protocol(struct pnp_protocol * protocol)

removes a pnp protocol from the pnp layer

Parameters

struct pnp_protocol * protocol
pointer to the corresponding pnp_protocol structure

struct pnp_dev * pnp_request_card_device(struct pnp_card_link * clink, const char * id, struct pnp_dev * from)

Searches for a PnP device under the specified card

Parameters

struct pnp_card_link * clink
pointer to the card link, cannot be NULL
const char * id
pointer to a PnP ID structure that explains the rules for finding the device
struct pnp_dev * from
Starting place to search from. If NULL it will start from the beginning.

void pnp_release_card_device(struct pnp_dev * dev)

call this when the driver no longer needs the device

Parameters

struct pnp_dev * dev
pointer to the PnP device structure

int pnp_register_card_driver(struct pnp_card_driver * drv)

registers a PnP card driver with the PnP Layer

Parameters

struct pnp_card_driver * drv
pointer to the driver to register

void pnp_unregister_card_driver(struct pnp_card_driver * drv)

unregisters a PnP card driver from the PnP Layer

Parameters

struct pnp_card_driver * drv
pointer to the driver to unregister

struct pnp_id * pnp_add_id(struct pnp_dev * dev, const char * id)

adds an EISA id to the specified device

Parameters

struct pnp_dev * dev
pointer to the desired device
const char * id
pointer to an EISA id string

int pnp_start_dev(struct pnp_dev * dev)

low-level start of the PnP device

Parameters

struct pnp_dev * dev
pointer to the desired device

Description

assumes that resources have already been allocated

int pnp_stop_dev(struct pnp_dev * dev)

low-level disable of the PnP device

Parameters

struct pnp_dev * dev
pointer to the desired device

Description

does not free resources

int pnp_activate_dev(struct pnp_dev * dev)

activates a PnP device for use

Parameters

struct pnp_dev * dev
pointer to the desired device

Description

does not validate or set resources so be careful.

int pnp_disable_dev(struct pnp_dev * dev)

disables device

Parameters

struct pnp_dev * dev
pointer to the desired device

Description

inform the correct pnp protocol so that resources can be used by other devices

int pnp_is_active(struct pnp_dev * dev)

Determines if a device is active based on its current resources

Parameters

struct pnp_dev * dev
pointer to the desired PnP device

Userspace IO devices

void uio_event_notify(struct uio_info * info)

trigger an interrupt event

Parameters

struct uio_info * info
UIO device capabilities

int __uio_register_device(struct module * owner, struct device * parent, struct uio_info * info)

register a new userspace IO device

Parameters

struct module * owner
module that creates the new device
struct device * parent
parent device
struct uio_info * info
UIO device capabilities

Description

returns zero on success or a negative error code.
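A hedged registration sketch from a probe function, assuming the mmio resource res and the parent platform_device pdev were obtained earlier (my_uio_info is illustrative):

```c
/* Sketch: register a UIO device exposing one physical memory region.
 * my_uio_info, res and pdev are assumed to exist in the driver. */
static struct uio_info my_uio_info = {
	.name		= "my_uio",
	.version	= "0.1",
	.irq		= UIO_IRQ_NONE,		/* or a real irq number */
};

	my_uio_info.mem[0].name    = "registers";
	my_uio_info.mem[0].addr    = res->start;	/* physical address */
	my_uio_info.mem[0].size    = resource_size(res);
	my_uio_info.mem[0].memtype = UIO_MEM_PHYS;
	/* mem[1].size stays 0, terminating the list */

	ret = __uio_register_device(THIS_MODULE, &pdev->dev, &my_uio_info);
	if (ret)
		return ret;
```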

void uio_unregister_device(struct uio_info * info)

unregister a userspace IO device

Parameters

struct uio_info * info
UIO device capabilities

struct uio_mem

description of a UIO memory region

Definition

struct uio_mem {
  const char * name;
  phys_addr_t addr;
  resource_size_t size;
  int memtype;
  void __iomem * internal_addr;
  struct uio_map * map;
};

Members

const char * name
name of the memory region for identification
phys_addr_t addr
address of the device’s memory (phys_addr is used since addr can be logical, virtual, or physical & phys_addr_t should always be large enough to handle any of the address types)
resource_size_t size
size of IO
int memtype
type of memory addr points to
void __iomem * internal_addr
ioremap-ped version of addr, for driver internal use
struct uio_map * map
for use by the UIO core only.

struct uio_port

description of a UIO port region

Definition

struct uio_port {
  const char * name;
  unsigned long start;
  unsigned long size;
  int porttype;
  struct uio_portio * portio;
};

Members

const char * name
name of the port region for identification
unsigned long start
start of port region
unsigned long size
size of port region
int porttype
type of port (see UIO_PORT_* below)
struct uio_portio * portio
for use by the UIO core only.

struct uio_info

UIO device capabilities

Definition

struct uio_info {
  struct uio_device * uio_dev;
  const char * name;
  const char * version;
  struct uio_mem mem[MAX_UIO_MAPS];
  struct uio_port port[MAX_UIO_PORT_REGIONS];
  long irq;
  unsigned long irq_flags;
  void * priv;
  irqreturn_t (* handler) (int irq, struct uio_info *dev_info);
  int (* mmap) (struct uio_info *info, struct vm_area_struct *vma);
  int (* open) (struct uio_info *info, struct inode *inode);
  int (* release) (struct uio_info *info, struct inode *inode);
  int (* irqcontrol) (struct uio_info *info, s32 irq_on);
};

Members

struct uio_device * uio_dev
the UIO device this info belongs to
const char * name
device name
const char * version
device driver version
struct uio_mem mem[MAX_UIO_MAPS]
list of mappable memory regions, size==0 for end of list
struct uio_port port[MAX_UIO_PORT_REGIONS]
list of port regions, size==0 for end of list
long irq
interrupt number or UIO_IRQ_CUSTOM
unsigned long irq_flags
flags for request_irq()
void * priv
optional private data
irqreturn_t (*)(int irq, struct uio_info *dev_info) handler
the device’s irq handler
int (*)(struct uio_info *info, struct vm_area_struct *vma) mmap
mmap operation for this uio device
int (*)(struct uio_info *info, struct inode *inode) open
open operation for this uio device
int (*)(struct uio_info *info, struct inode *inode) release
release operation for this uio device
int (*)(struct uio_info *info, s32 irq_on) irqcontrol
disable/enable irqs when 0/1 is written to /dev/uioX

Parallel Port Devices

int parport_yield(struct pardevice * dev)

relinquish a parallel port temporarily

Parameters

struct pardevice * dev
a device on the parallel port

Description

This function relinquishes the port if it would be helpful to other drivers to do so. Afterwards it tries to reclaim the port using parport_claim(), and the return value is the same as for parport_claim(). If it fails, the port is left unclaimed and it is the driver’s responsibility to reclaim the port.

The parport_yield() and parport_yield_blocking() functions are for marking points in the driver at which other drivers may claim the port and use their devices. Yielding the port is similar to releasing it and reclaiming it, but is more efficient because no action is taken if there are no other devices needing the port. In fact, nothing is done even if there are other devices waiting but the current device is still within its “timeslice”. The default timeslice is half a second, but it can be adjusted via the /proc interface.

int parport_yield_blocking(struct pardevice * dev)

relinquish a parallel port temporarily

Parameters

struct pardevice * dev
a device on the parallel port

Description

This function relinquishes the port if it would be helpful to other drivers to do so. Afterwards it tries to reclaim the port using parport_claim_or_block(), and the return value is the same as for parport_claim_or_block().

int parport_wait_event(struct parport * port, signed long timeout)

wait for an event on a parallel port

Parameters

struct parport * port
port to wait on
signed long timeout
time to wait (in jiffies)

Description

This function waits for up to timeout jiffies for an interrupt to occur on a parallel port. If the port timeout is set to zero, it returns immediately.

If an interrupt occurs before the timeout period elapses, this function returns zero immediately. If it times out, it returns one. An error code less than zero indicates an error (most likely a pending signal), and the calling code should finish what it’s doing as soon as it can.

int parport_wait_peripheral(struct parport * port, unsigned char mask, unsigned char result)

wait for status lines to change in 35ms

Parameters

struct parport * port
port to watch
unsigned char mask
status lines to watch
unsigned char result
desired values of chosen status lines

Description

This function waits until the masked status lines have the desired values, or until 35ms have elapsed (see IEEE 1284-1994 page 24 to 25 for why this value in particular is hardcoded). The mask and result parameters are bitmasks, with the bits defined by the constants in parport.h: PARPORT_STATUS_BUSY, and so on.

The port is polled quickly to start off with, in anticipation of a fast response from the peripheral. This fast polling time is configurable (using /proc), and defaults to 500usec. If the timeout for this port (see parport_set_timeout()) is zero, the fast polling time is 35ms, and this function does not call schedule().

If the timeout for this port is non-zero, after the fast polling fails it uses parport_wait_event() to wait for up to 10ms, waking up if an interrupt occurs.
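The condition being polled is a simple masked comparison. A minimal sketch, with the PARPORT_STATUS_* bit values stated here as assumptions for illustration:

```c
#include <assert.h>

/* Status-line bits in the style of <linux/parport.h>; the numeric
 * values here are assumptions for illustration. */
#define PP_STATUS_ERROR 0x08
#define PP_STATUS_ACK   0x40
#define PP_STATUS_BUSY  0x80

/* The condition parport_wait_peripheral() polls for: every status
 * line selected by 'mask' must hold the value given in 'result'. */
static int status_lines_match(unsigned char status,
                              unsigned char mask,
                              unsigned char result)
{
    return (status & mask) == result;
}
```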

int parport_negotiate(struct parport * port, int mode)

negotiate an IEEE 1284 mode

Parameters

struct parport * port
port to use
int mode
mode to negotiate to

Description

Use this to negotiate to a particular IEEE 1284 transfer mode. The mode parameter should be one of the constants in parport.h starting IEEE1284_MODE_xxx.

The return value is 0 if the peripheral has accepted the negotiation to the mode specified, -1 if the peripheral is not IEEE 1284 compliant (or not present), or 1 if the peripheral has rejected the negotiation.
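A driver typically folds these three outcomes into an errno-style result before propagating them. This is only a sketch of one way to do it; the choice of -ENODEV and -EIO here is an assumption, not what any particular driver does:

```c
#include <assert.h>

/* parport_negotiate() return values, per the description above. */
#define NEG_ACCEPTED    0
#define NEG_NOT_1284   -1
#define NEG_REJECTED    1

/* Fold the three outcomes into an errno-style result; the
 * -ENODEV/-EIO mapping is an illustrative assumption. */
static int negotiation_to_errno(int ret)
{
    if (ret == NEG_ACCEPTED)
        return 0;
    if (ret == NEG_NOT_1284)
        return -19;   /* -ENODEV: peripheral absent or not compliant */
    return -5;        /* -EIO: peripheral rejected the mode */
}
```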

ssize_t parport_write(struct parport * port, const void * buffer, size_t len)

write a block of data to a parallel port

Parameters

struct parport * port
port to write to
const void * buffer
data buffer (in kernel space)
size_t len
number of bytes of data to transfer

Description

This will write up to len bytes of buffer to the port specified, using the IEEE 1284 transfer mode most recently negotiated to (using parport_negotiate()), as long as that mode supports forward transfers (host to peripheral).

It is the caller’s responsibility to ensure that the first len bytes of buffer are valid.

This function returns the number of bytes transferred (if zero or positive), or else an error code.
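Because the call may transfer fewer than len bytes, a caller that needs the whole buffer sent loops on the return value. The sketch below uses a stub in place of the real port write so the partial-transfer handling can be shown in isolation:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for parport_write(): transfers at most 4 bytes per call,
 * so the partial-transfer handling below gets exercised. */
static long fake_port_write(const void *buf, size_t len)
{
    (void)buf;
    return (long)(len < 4 ? len : 4);
}

/* The write may transfer fewer than len bytes, so loop until done
 * or an error (negative return) occurs. */
static long write_all(const void *buffer, size_t len)
{
    const char *p = buffer;
    size_t done = 0;

    while (done < len) {
        long n = fake_port_write(p + done, len - done);
        if (n < 0)
            return n;          /* propagate the error code */
        if (n == 0)
            break;             /* no progress; stop to avoid spinning */
        done += (size_t)n;
    }
    return (long)done;
}
```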

ssize_t parport_read(struct parport * port, void * buffer, size_t len)

read a block of data from a parallel port

Parameters

struct parport * port
port to read from
void * buffer
data buffer (in kernel space)
size_t len
number of bytes of data to transfer

Description

This will read up to len bytes into buffer from the port specified, using the IEEE 1284 transfer mode most recently negotiated to (using parport_negotiate()), as long as that mode supports reverse transfers (peripheral to host).

It is the caller’s responsibility to ensure that the first len bytes of buffer are available to write to.

This function returns the number of bytes transferred (if zero or positive), or else an error code.

long parport_set_timeout(struct pardevice * dev, long inactivity)

set the inactivity timeout for a device

Parameters

struct pardevice * dev
device on a port
long inactivity
inactivity timeout (in jiffies)

Description

This sets the inactivity timeout for a particular device on a port. This affects functions like parport_wait_peripheral(). The special value 0 means not to call schedule() while dealing with this device.

The return value is the previous inactivity timeout.

Any callers of parport_wait_event() for this device are woken up.

int __parport_register_driver(struct parport_driver * drv, struct module * owner, const char * mod_name)

register a parallel port device driver

Parameters

struct parport_driver * drv
structure describing the driver
struct module * owner
owner module of drv
const char * mod_name
module name string

Description

This can be called by a parallel port device driver in order to receive notifications about ports being found in the system, as well as about ports that are no longer available.

If drv->devmodel is true then the new device model is used for registration.

The drv structure is allocated by the caller and must not be deallocated until after calling parport_unregister_driver().

If using the non device model: The driver’s attach() function may block. The port that attach() is given will be valid for the duration of the callback, but if the driver wants to take a copy of the pointer it must call parport_get_port() to do so. Calling parport_register_device() on that port will do this for you.

The driver’s detach() function may block. The port that detach() is given will be valid for the duration of the callback, but if the driver wants to take a copy of the pointer it must call parport_get_port() to do so.

Returns 0 on success. The non device model will always succeed, but the new device model can fail and will return the error code.

void parport_unregister_driver(struct parport_driver * drv)

deregister a parallel port device driver

Parameters

struct parport_driver * drv
structure describing the driver that was given to parport_register_driver()

Description

This should be called by a parallel port device driver that has registered itself using parport_register_driver() when it is about to be unloaded.

When it returns, the driver’s attach() routine will no longer be called, and for each port that attach() was called for, the detach() routine will have been called.

All the driver’s attach() and detach() calls are guaranteed to have finished by the time this function returns.

struct parport * parport_get_port(struct parport * port)

increment a port’s reference count

Parameters

struct parport * port
the port

Description

This ensures that a struct parport pointer remains valid until the matching parport_put_port() call.
void parport_put_port(struct parport * port)

decrement a port’s reference count

Parameters

struct parport * port
the port

Description

This should be called once for each call to parport_get_port(), once the port is no longer needed. When the reference count reaches zero (port is no longer used), free_port is called.
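The get/put contract can be modelled outside the kernel as a plain reference count; the sketch below uses a flag in place of the real free_port() call:

```c
#include <assert.h>

/* Toy model of the parport_get_port()/parport_put_port() contract:
 * the object stays valid while the count is non-zero, and the final
 * put triggers destruction (a flag here, free_port() in the kernel). */
struct port_model {
    int refcount;
    int freed;
};

static void model_get(struct port_model *p) { p->refcount++; }

static void model_put(struct port_model *p)
{
    if (--p->refcount == 0)
        p->freed = 1;    /* stands in for free_port() */
}

/* One registration reference plus one extra get/put pair. */
static int refcount_scenario(void)
{
    struct port_model p = { 1, 0 };

    model_get(&p);       /* second user takes a reference */
    model_put(&p);       /* ...and drops it: port still alive */
    if (p.freed)
        return -1;
    model_put(&p);       /* last reference gone: port is freed */
    return p.freed;
}
```
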
struct parport * parport_register_port(unsigned long base, int irq, int dma, struct parport_operations * ops)

register a parallel port

Parameters

unsigned long base
base I/O address
int irq
IRQ line
int dma
DMA channel
struct parport_operations * ops
pointer to the port driver’s port operations structure

Description

When a parallel port (lowlevel) driver finds a port that should be made available to parallel port device drivers, it should call parport_register_port(). The base, irq, and dma parameters are for the convenience of port drivers, and for ports where they aren’t meaningful needn’t be set to anything special. They can be altered afterwards by adjusting the relevant members of the parport structure that is returned and represents the port. They should not be tampered with after calling parport_announce_port, however.

If there are parallel port device drivers in the system that have registered themselves using parport_register_driver(), they are not told about the port at this time; that is done by parport_announce_port().

The ops structure is allocated by the caller, and must not be deallocated before calling parport_remove_port().

If there is no memory to allocate a new parport structure, this function will return NULL.

void parport_announce_port(struct parport * port)

tell device drivers about a parallel port

Parameters

struct parport * port
parallel port to announce

Description

After a port driver has registered a parallel port with parport_register_port, and performed any necessary initialisation or adjustments, it should call parport_announce_port() in order to notify all device drivers that have called parport_register_driver(). Their attach() functions will be called, with port as the parameter.
void parport_remove_port(struct parport * port)

deregister a parallel port

Parameters

struct parport * port
parallel port to deregister

Description

When a parallel port driver is forcibly unloaded, or a parallel port becomes inaccessible, the port driver must call this function in order to deal with device drivers that still want to use it.

The parport structure associated with the port has its operations structure replaced with one containing ‘null’ operations that return errors or just don’t do anything.

Any drivers that have registered themselves using parport_register_driver() are notified that the port is no longer accessible by having their detach() routines called with port as the parameter.

struct pardevice * parport_register_device(struct parport * port, const char * name, int (*pf)(void *), void (*kf)(void *), void (*irq_func)(void *), int flags, void * handle)

register a device on a parallel port

Parameters

struct parport * port
port to which the device is attached
const char * name
a name to refer to the device
int (*)(void *) pf
preemption callback
void (*)(void *) kf
kick callback (wake-up)
void (*)(void *) irq_func
interrupt handler
int flags
registration flags
void * handle
data for callback functions

Description

This function, called by parallel port device drivers, declares that a device is connected to a port, and tells the system all it needs to know.

The name is allocated by the caller and must not be deallocated until the caller calls parport_unregister_device for that device.

The preemption callback function, pf, is called when this device driver has claimed access to the port but another device driver wants to use it. It is given handle as its parameter, and should return zero if it is willing for the system to release the port to another driver on its behalf. If it wants to keep control of the port it should return non-zero, and no action will be taken. It is good manners for the driver to try to release the port at the earliest opportunity after its preemption callback rejects a preemption attempt. Note that if a preemption callback is happy for preemption to go ahead, there is no need to release the port; it is done automatically. This function may not block, as it may be called from interrupt context. If the device driver does not support preemption, pf can be NULL.

The wake-up (“kick”) callback function, kf, is called when the port is available to be claimed for exclusive access; that is, parport_claim() is guaranteed to succeed when called from inside the wake-up callback function. If the driver wants to claim the port it should do so; otherwise, it need not take any action. This function may not block, as it may be called from interrupt context. If the device driver does not want to be explicitly invited to claim the port in this way, kf can be NULL.

The interrupt handler, irq_func, is called when an interrupt arrives from the parallel port. Note that if a device driver wants to use interrupts it should use parport_enable_irq(), and can also check the irq member of the parport structure representing the port.

The parallel port (lowlevel) driver is the one that has called request_irq() and whose interrupt handler is called first. This handler does whatever needs to be done to the hardware to acknowledge the interrupt (for PC-style ports there is nothing special to be done). It then tells the IEEE 1284 code about the interrupt, which may involve reacting to an IEEE 1284 event depending on the current IEEE 1284 phase. After this, it calls irq_func. Needless to say, irq_func will be called from interrupt context, and may not block.

The PARPORT_DEV_EXCL flag is for preventing port sharing, and so should only be used when sharing the port with other device drivers is impossible and would lead to incorrect behaviour. Use it sparingly! Normally, flags will be zero.

This function returns a pointer to a structure that represents the device on the port, or NULL if there is not enough memory to allocate space for that structure.
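The preemption callback's contract — return zero to let the port go, non-zero to keep it, never block — can be sketched as below. The demo_device structure and its mid_transfer field are hypothetical driver state, not part of the parport API:

```c
#include <assert.h>

/* Hypothetical per-device state handed to the callbacks as 'handle'. */
struct demo_device {
    int mid_transfer;   /* non-zero while a transfer is in flight */
};

/* Sketch of a preemption callback as described above: return 0 to
 * let the system release the port on our behalf, non-zero to keep
 * it. Must not block -- it can run in interrupt context. */
static int demo_preempt(void *handle)
{
    const struct demo_device *dev = handle;

    return dev->mid_transfer;   /* keep the port only while busy */
}

static int preempt_decision(int mid_transfer)
{
    struct demo_device dev = { mid_transfer };

    return demo_preempt(&dev);
}
```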

void parport_unregister_device(struct pardevice * dev)

deregister a device on a parallel port

Parameters

struct pardevice * dev
pointer to structure representing device

Description

This undoes the effect of parport_register_device().
struct parport * parport_find_number(int number)

find a parallel port by number

Parameters

int number
parallel port number

Description

This returns the parallel port with the specified number, or NULL if there is none.

There is an implicit parport_get_port() done already; to throw away the reference to the port that parport_find_number() gives you, use parport_put_port().

struct parport * parport_find_base(unsigned long base)

find a parallel port by base address

Parameters

unsigned long base
base I/O address

Description

This returns the parallel port with the specified base address, or NULL if there is none.

There is an implicit parport_get_port() done already; to throw away the reference to the port that parport_find_base() gives you, use parport_put_port().

int parport_claim(struct pardevice * dev)

claim access to a parallel port device

Parameters

struct pardevice * dev
pointer to structure representing a device on the port

Description

This function will not block and so can be used from interrupt context. If parport_claim() succeeds in claiming access to the port it returns zero and the port is available to use. It may fail (returning non-zero) if the port is in use by another driver and that driver is not willing to relinquish control of the port.
int parport_claim_or_block(struct pardevice * dev)

claim access to a parallel port device

Parameters

struct pardevice * dev
pointer to structure representing a device on the port

Description

This behaves like parport_claim(), but will block if necessary to wait for the port to be free. A return value of 1 indicates that it slept; 0 means that it succeeded without needing to sleep. A negative error code indicates failure.
void parport_release(struct pardevice * dev)

give up access to a parallel port device

Parameters

struct pardevice * dev
pointer to structure representing parallel port device

Description

This function cannot fail, but it should not be called without the port claimed. Similarly, if the port is already claimed you should not try claiming it again.
struct pardevice * parport_open(int devnum, const char * name)

find a device by canonical device number

Parameters

int devnum
canonical device number
const char * name
name to associate with the device

Description

This function is similar to parport_register_device(), except that it locates a device by its number rather than by the port it is attached to.

All parameters except for devnum are the same as for parport_register_device(). The return value is the same as for parport_register_device().

void parport_close(struct pardevice * dev)

close a device opened with parport_open()

Parameters

struct pardevice * dev
device to close

Description

This is the equivalent of parport_unregister_device() for devices opened with parport_open().

Message-based devices

Fusion message devices

u8 mpt_register(MPT_CALLBACK cbfunc, MPT_DRIVER_CLASS dclass, char * func_name)

Register protocol-specific main callback handler.

Parameters

MPT_CALLBACK cbfunc
callback function pointer
MPT_DRIVER_CLASS dclass
Protocol driver’s class (MPT_DRIVER_CLASS enum value)
char * func_name
call function’s name

Description

This routine is called by a protocol-specific driver (SCSI host, LAN, SCSI target) to register its reply callback routine. Each protocol-specific driver must do this before it will be able to use any IOC resources, such as obtaining request frames.

NOTES

The SCSI protocol driver currently calls this routine three times in order to register separate callbacks: one for “normal” SCSI IO, one for MptScsiTaskMgmt requests, and one for Scan/DV requests.

Returns u8 valued “handle” in the range (and S.O.D. order) {N,...,7,6,5,...,1} if successful. A return value of MPT_MAX_PROTOCOL_DRIVERS (including zero!) should be considered an error by the caller.
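A caller therefore has to range-check the handle before using it. A minimal sketch, taking the value 16 for MPT_MAX_PROTOCOL_DRIVERS as an assumption about the Fusion headers:

```c
#include <assert.h>

/* Assumed value of MPT_MAX_PROTOCOL_DRIVERS from the Fusion headers. */
#define MPT_MAX_PROTOCOL_DRIVERS 16

/* Per the text above, both zero and MPT_MAX_PROTOCOL_DRIVERS must be
 * treated as failed registrations; valid handles fall in between. */
static int mpt_handle_valid(unsigned int cb_idx)
{
    return cb_idx != 0 && cb_idx < MPT_MAX_PROTOCOL_DRIVERS;
}
```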

void mpt_deregister(u8 cb_idx)

Deregister a protocol driver’s resources.

Parameters

u8 cb_idx
previously registered callback handle

Description

Each protocol-specific driver should call this routine when its module is unloaded.
int mpt_event_register(u8 cb_idx, MPT_EVHANDLER ev_cbfunc)

Register protocol-specific event callback handler.

Parameters

u8 cb_idx
previously registered (via mpt_register) callback handle
MPT_EVHANDLER ev_cbfunc
callback function

Description

This routine can be called by one or more protocol-specific drivers if/when they choose to be notified of MPT events.

Returns 0 for success.

void mpt_event_deregister(u8 cb_idx)

Deregister protocol-specific event callback handler

Parameters

u8 cb_idx
previously registered callback handle

Description

Each protocol-specific driver should call this routine when it does not (or can no longer) handle events, or when its module is unloaded.
int mpt_reset_register(u8 cb_idx, MPT_RESETHANDLER reset_func)

Register protocol-specific IOC reset handler.

Parameters

u8 cb_idx
previously registered (via mpt_register) callback handle
MPT_RESETHANDLER reset_func
reset function

Description

This routine can be called by one or more protocol-specific drivers if/when they choose to be notified of IOC resets.

Returns 0 for success.

void mpt_reset_deregister(u8 cb_idx)

Deregister protocol-specific IOC reset handler.

Parameters

u8 cb_idx
previously registered callback handle

Description

Each protocol-specific driver should call this routine when it does not (or can no longer) handle IOC reset handling, or when its module is unloaded.
int mpt_device_driver_register(struct mpt_pci_driver * dd_cbfunc, u8 cb_idx)

Register device driver hooks

Parameters

struct mpt_pci_driver * dd_cbfunc
driver callbacks struct
u8 cb_idx
MPT protocol driver index
void mpt_device_driver_deregister(u8 cb_idx)

DeRegister device driver hooks

Parameters

u8 cb_idx
MPT protocol driver index
MPT_FRAME_HDR* mpt_get_msg_frame(u8 cb_idx, MPT_ADAPTER * ioc)

Obtain an MPT request frame from the pool

Parameters

u8 cb_idx
Handle of registered MPT protocol driver
MPT_ADAPTER * ioc
Pointer to MPT adapter structure

Description

Obtain an MPT request frame from the pool (of 1024) that is allocated per MPT adapter.

Returns pointer to an MPT request frame, or NULL if none are available or the IOC is not active.

void mpt_put_msg_frame(u8 cb_idx, MPT_ADAPTER * ioc, MPT_FRAME_HDR * mf)

Send a protocol-specific MPT request frame to an IOC

Parameters

u8 cb_idx
Handle of registered MPT protocol driver
MPT_ADAPTER * ioc
Pointer to MPT adapter structure
MPT_FRAME_HDR * mf
Pointer to MPT request frame

Description

This routine posts an MPT request frame to the request post FIFO of a specific MPT adapter.
void mpt_put_msg_frame_hi_pri(u8 cb_idx, MPT_ADAPTER * ioc, MPT_FRAME_HDR * mf)

Send a hi-pri protocol-specific MPT request frame

Parameters

u8 cb_idx
Handle of registered MPT protocol driver
MPT_ADAPTER * ioc
Pointer to MPT adapter structure
MPT_FRAME_HDR * mf
Pointer to MPT request frame

Description

Send a protocol-specific MPT request frame to an IOC using hi-priority request queue.

This routine posts an MPT request frame to the request post FIFO of a specific MPT adapter.

void mpt_free_msg_frame(MPT_ADAPTER * ioc, MPT_FRAME_HDR * mf)

Place MPT request frame back on FreeQ.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter structure
MPT_FRAME_HDR * mf
Pointer to MPT request frame

Description

This routine places an MPT request frame back on the MPT adapter’s FreeQ.
int mpt_send_handshake_request(u8 cb_idx, MPT_ADAPTER * ioc, int reqBytes, u32 * req, int sleepFlag)

Send MPT request via doorbell handshake method.

Parameters

u8 cb_idx
Handle of registered MPT protocol driver
MPT_ADAPTER * ioc
Pointer to MPT adapter structure
int reqBytes
Size of the request in bytes
u32 * req
Pointer to MPT request frame
int sleepFlag
Use schedule if CAN_SLEEP else use udelay.

Description

This routine is used exclusively to send MptScsiTaskMgmt requests since they are required to be sent via doorbell handshake.

NOTE

It is the caller’s responsibility to byte-swap fields in the request which are greater than 1 byte in size.

Returns 0 for success, non-zero for failure.

int mpt_verify_adapter(int iocid, MPT_ADAPTER ** iocpp)

Given IOC identifier, set pointer to its adapter structure.

Parameters

int iocid
IOC unique identifier (integer)
MPT_ADAPTER ** iocpp
Pointer to pointer to IOC adapter

Description

Given a unique IOC identifier, set pointer to the associated MPT adapter structure.

Returns iocid and sets iocpp if iocid is found. Returns -1 if iocid is not found.

int mpt_attach(struct pci_dev * pdev, const struct pci_device_id * id)

Install a PCI intelligent MPT adapter.

Parameters

struct pci_dev * pdev
Pointer to pci_dev structure
const struct pci_device_id * id
PCI device ID information

Description

This routine performs all the steps necessary to bring the IOC of an MPT adapter to an OPERATIONAL state. This includes registering memory regions, registering the interrupt, and allocating request and reply memory pools.

This routine also pre-fetches the LAN MAC address of a Fibre Channel MPT adapter.

Returns 0 for success, non-zero for failure.

TODO: Add support for polled controllers

void mpt_detach(struct pci_dev * pdev)

Remove a PCI intelligent MPT adapter.

Parameters

struct pci_dev * pdev
Pointer to pci_dev structure
int mpt_suspend(struct pci_dev * pdev, pm_message_t state)

Fusion MPT base driver suspend routine.

Parameters

struct pci_dev * pdev
Pointer to pci_dev structure
pm_message_t state
new state to enter
int mpt_resume(struct pci_dev * pdev)

Fusion MPT base driver resume routine.

Parameters

struct pci_dev * pdev
Pointer to pci_dev structure
u32 mpt_GetIocState(MPT_ADAPTER * ioc, int cooked)

Get the current state of an MPT adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int cooked
Request raw or cooked IOC state

Description

Returns all IOC Doorbell register bits if cooked==0, else just the Doorbell bits in MPI_IOC_STATE_MASK.
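The raw/cooked distinction is a single mask operation. A sketch, taking 0xF0000000 for MPI_IOC_STATE_MASK as an assumption about the MPI headers:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed value of MPI_IOC_STATE_MASK from the MPI headers: the
 * state lives in the top nibble of the doorbell register. */
#define MPI_IOC_STATE_MASK 0xF0000000u

/* cooked == 0 returns the raw doorbell; otherwise only the state
 * bits survive, as the description above says. */
static uint32_t ioc_state(uint32_t doorbell, int cooked)
{
    return cooked ? (doorbell & MPI_IOC_STATE_MASK) : doorbell;
}
```
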
int mpt_alloc_fw_memory(MPT_ADAPTER * ioc, int size)

allocate firmware memory

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int size
total FW bytes

Description

If memory has already been allocated, the same (cached) value is returned.

Return 0 if successful, or non-zero for failure

void mpt_free_fw_memory(MPT_ADAPTER * ioc)

free firmware memory

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure

Description

If alt_img is NULL, delete from ioc structure. Else, delete a secondary image in same format.
int mptbase_sas_persist_operation(MPT_ADAPTER * ioc, u8 persist_opcode)

Perform operation on SAS Persistent Table

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
u8 persist_opcode
see below

Description

MPI_SAS_OP_CLEAR_NOT_PRESENT - Free all persist TargetID mappings for
devices not currently present.

MPI_SAS_OP_CLEAR_ALL_PERSISTENT - Clear all persist TargetID mappings

NOTE

Don’t use this function during interrupt time.

Returns 0 for success, non-zero error
int mpt_raid_phys_disk_pg0(MPT_ADAPTER * ioc, u8 phys_disk_num, RaidPhysDiskPage0_t * phys_disk)

returns phys disk page zero

Parameters

MPT_ADAPTER * ioc
Pointer to an Adapter Structure
u8 phys_disk_num
io unit unique phys disk num generated by the ioc
RaidPhysDiskPage0_t * phys_disk
requested payload data returned

Return

0 on success
-EFAULT if read of config page header fails or data pointer not NULL
-ENOMEM if pci_alloc failed
int mpt_raid_phys_disk_get_num_paths(MPT_ADAPTER * ioc, u8 phys_disk_num)

returns number paths associated to this phys_num

Parameters

MPT_ADAPTER * ioc
Pointer to an Adapter Structure
u8 phys_disk_num
io unit unique phys disk num generated by the ioc

Return

returns number paths
int mpt_raid_phys_disk_pg1(MPT_ADAPTER * ioc, u8 phys_disk_num, RaidPhysDiskPage1_t * phys_disk)

returns phys disk page 1

Parameters

MPT_ADAPTER * ioc
Pointer to an Adapter Structure
u8 phys_disk_num
io unit unique phys disk num generated by the ioc
RaidPhysDiskPage1_t * phys_disk
requested payload data returned

Return

0 on success
-EFAULT if read of config page header fails or data pointer not NULL
-ENOMEM if pci_alloc failed
int mpt_findImVolumes(MPT_ADAPTER * ioc)

Identify IDs of hidden disks and RAID Volumes

Parameters

MPT_ADAPTER * ioc
Pointer to an Adapter Structure

Return

0 on success
-EFAULT if read of config page header fails or data pointer not NULL
-ENOMEM if pci_alloc failed
int mpt_config(MPT_ADAPTER * ioc, CONFIGPARMS * pCfg)

Generic function to issue config message

Parameters

MPT_ADAPTER * ioc
Pointer to an adapter structure
CONFIGPARMS * pCfg
Pointer to a configuration structure. Struct contains action, page address, direction, physical address and pointer to a configuration page header. Page header is updated.

Description

Returns 0 for success
-EPERM if not allowed due to ISR context
-EAGAIN if no msg frames currently available
-EFAULT for non-successful reply or no reply (timeout)
void mpt_print_ioc_summary(MPT_ADAPTER * ioc, char * buffer, int * size, int len, int showlan)

Write ASCII summary of IOC to a buffer.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
char * buffer
Pointer to buffer where IOC summary info should be written
int * size
Pointer to number of bytes we wrote (set by this routine)
int len
Offset at which to start writing in buffer
int showlan
Display LAN stuff?

Description

This routine writes human-readable ASCII text, which represents a summary of IOC information, to a buffer.
int mpt_set_taskmgmt_in_progress_flag(MPT_ADAPTER * ioc)

set flags associated with task management

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure

Description

Returns 0 for SUCCESS or -1 if FAILED.

If -1 is returned, it was not possible to set the flags.

void mpt_clear_taskmgmt_in_progress_flag(MPT_ADAPTER * ioc)

clear flags associated with task management

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
void mpt_halt_firmware(MPT_ADAPTER * ioc)

Halts the firmware if it is operational and panic the kernel

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int mpt_Soft_Hard_ResetHandler(MPT_ADAPTER * ioc, int sleepFlag)

Try less expensive reset

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int sleepFlag
Indicates if sleep or schedule must be called.

Description

Returns 0 for SUCCESS or -1 if FAILED. Try for softreset first, only if it fails go for expensive HardReset.
int mpt_HardResetHandler(MPT_ADAPTER * ioc, int sleepFlag)

Generic reset handler

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int sleepFlag
Indicates if sleep or schedule must be called.

Description

Issues SCSI Task Management call based on input arg values. If TaskMgmt fails, returns associated SCSI request.

Remark: _HardResetHandler can be invoked from an interrupt thread (timer) or a non-interrupt thread. In the former, must not call schedule().

Note

A return of -1 is a FATAL error case, as it means a

FW reload/initialization failed.

Returns 0 for SUCCESS or -1 if FAILED.

u8 mpt_get_cb_idx(MPT_DRIVER_CLASS dclass)

obtain cb_idx for registered driver

Parameters

MPT_DRIVER_CLASS dclass
class driver enum

Description

Returns cb_idx, or zero if it wasn’t found.
int mpt_is_discovery_complete(MPT_ADAPTER * ioc)

determine if discovery has completed

Parameters

MPT_ADAPTER * ioc
per adapter instance

Description

Returns 1 when discovery completed, else zero.

int mpt_remove_dead_ioc_func(void * arg)

kthread context to remove dead ioc

Parameters

void * arg
input argument, used to derive ioc

Description

Return 0 if the controller is removed from the pci subsystem; -1 otherwise.

void mpt_fault_reset_work(struct work_struct * work)

work performed on workq after ioc fault

Parameters

struct work_struct * work
input argument, used to derive ioc
irqreturn_t mpt_interrupt(int irq, void * bus_id)

MPT adapter (IOC) specific interrupt handler.

Parameters

int irq
irq number (not used)
void * bus_id
bus identifier cookie == pointer to MPT_ADAPTER structure

Description

This routine is registered via the request_irq() kernel API call, and handles all interrupts generated from a specific MPT adapter (also referred to as an IO Controller, or IOC). This routine must clear the interrupt from the adapter and does so by reading the reply FIFO. Multiple replies may be processed per single call to this routine.

This routine handles register-level access of the adapter but dispatches (calls) a protocol-specific callback routine to handle the protocol-specific details of the MPT request completion.

int mptbase_reply(MPT_ADAPTER * ioc, MPT_FRAME_HDR * req, MPT_FRAME_HDR * reply)

MPT base driver’s callback routine

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
MPT_FRAME_HDR * req
Pointer to original MPT request frame
MPT_FRAME_HDR * reply
Pointer to MPT reply frame (NULL if TurboReply)

Description

MPT base driver’s callback routine; all base driver “internal” request/reply processing is routed here. Currently used for EventNotification and EventAck handling.

Returns 1 indicating original alloc’d request frame ptr should be freed, or 0 if it shouldn’t.

void mpt_add_sge(void * pAddr, u32 flagslength, dma_addr_t dma_addr)

Place a simple 32 bit SGE at address pAddr.

Parameters

void * pAddr
virtual address for SGE
u32 flagslength
SGE flags and data transfer length
dma_addr_t dma_addr
Physical address

Description

This routine places a simple 32 bit SGE at address pAddr.
void mpt_add_sge_64bit(void * pAddr, u32 flagslength, dma_addr_t dma_addr)

Place a simple 64 bit SGE at address pAddr.

Parameters

void * pAddr
virtual address for SGE
u32 flagslength
SGE flags and data transfer length
dma_addr_t dma_addr
Physical address

Description

This routine places a simple 64 bit SGE at address pAddr.
void mpt_add_sge_64bit_1078(void * pAddr, u32 flagslength, dma_addr_t dma_addr)

Place a simple 64 bit SGE at address pAddr (1078 workaround).

Parameters

void * pAddr
virtual address for SGE
u32 flagslength
SGE flags and data transfer length
dma_addr_t dma_addr
Physical address

Description

This routine places a simple 64 bit SGE at address pAddr (1078 workaround).
void mpt_add_chain(void * pAddr, u8 next, u16 length, dma_addr_t dma_addr)

Place a 32 bit chain SGE at address pAddr.

Parameters

void * pAddr
virtual address for SGE
u8 next
nextChainOffset value (u32’s)
u16 length
length of next SGL segment
dma_addr_t dma_addr
Physical address
void mpt_add_chain_64bit(void * pAddr, u8 next, u16 length, dma_addr_t dma_addr)

Place a 64 bit chain SGE at address pAddr.

Parameters

void * pAddr
virtual address for SGE
u8 next
nextChainOffset value (u32’s)
u16 length
length of next SGL segment
dma_addr_t dma_addr
Physical address
int mpt_host_page_access_control(MPT_ADAPTER * ioc, u8 access_control_value, int sleepFlag)

control the IOC’s Host Page Buffer access

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter structure
u8 access_control_value
define bits below
int sleepFlag
Specifies whether the process can sleep

Description

Provides mechanism for the host driver to control the IOC’s Host Page Buffer access.

Access Control Value - bits[15:12]:

0h Reserved
1h Enable Access { MPI_DB_HPBAC_ENABLE_ACCESS }
2h Disable Access { MPI_DB_HPBAC_DISABLE_ACCESS }
3h Free Buffer { MPI_DB_HPBAC_FREE_BUFFER }
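Placing the opcode into doorbell bits [15:12] is a shift. A sketch using the opcode numbers from the table above; the enum names mirror the MPI_DB_HPBAC_* constants mentioned in the text but are local to this example:

```c
#include <assert.h>
#include <stdint.h>

/* Opcodes from the table above (1h/2h/3h); the symbolic names mirror
 * the MPI_DB_HPBAC_* constants mentioned in the text. */
enum hpbac_op {
    HPBAC_ENABLE_ACCESS  = 1,
    HPBAC_DISABLE_ACCESS = 2,
    HPBAC_FREE_BUFFER    = 3,
};

/* The access-control value occupies doorbell bits [15:12]. */
static uint32_t hpbac_doorbell_bits(enum hpbac_op op)
{
    return (uint32_t)op << 12;
}
```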

Returns 0 for success, non-zero for failure.

int mpt_host_page_alloc(MPT_ADAPTER * ioc, pIOCInit_t ioc_init)

allocate system memory for the fw

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter structure
pIOCInit_t ioc_init
Pointer to ioc init config page

Description

If we already allocated memory in the past, then resend the same pointer. Returns 0 for success, non-zero for failure.
const char* mpt_get_product_name(u16 vendor, u16 device, u8 revision)

returns product string

Parameters

u16 vendor
pci vendor id
u16 device
pci device id
u8 revision
pci revision id

Description

Returns product string displayed when driver loads, in /proc/mpt/summary and /sys/class/scsi_host/host<X>/version_product
int mpt_mapresources(MPT_ADAPTER * ioc)

map in memory mapped io

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter structure
int mpt_do_ioc_recovery(MPT_ADAPTER * ioc, u32 reason, int sleepFlag)

Initialize or recover MPT adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter structure
u32 reason
Event word / reason
int sleepFlag
Use schedule if CAN_SLEEP else use udelay.

Description

This routine performs all the steps necessary to bring the IOC to a OPERATIONAL state.

This routine also pre-fetches the LAN MAC address of a Fibre Channel MPT adapter.

Return

0 for success

-1 if failed to get board READY
-2 if READY but IOCFacts Failed
-3 if READY but PrimeIOCFifos Failed
-4 if READY but IOCInit Failed
-5 if failed to enable_device and/or request_selected_regions
-6 if failed to upload firmware

void mpt_detect_bound_ports(MPT_ADAPTER * ioc, struct pci_dev * pdev)

Search for matching PCI bus/dev_function

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter structure
struct pci_dev * pdev
Pointer to (struct pci_dev) structure

Description

Search for PCI bus/dev_function which matches PCI bus/dev_function (+/-1) for newly discovered 929, 929X, 1030 or 1035.

If match on PCI dev_function +/-1 is found, bind the two MPT adapters using alt_ioc pointer fields in their MPT_ADAPTER structures.

void mpt_adapter_disable(MPT_ADAPTER * ioc)

Disable misbehaving MPT adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter structure
void mpt_adapter_dispose(MPT_ADAPTER * ioc)

Free all resources associated with an MPT adapter

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter structure

Description

This routine unregisters h/w resources and frees all alloc’d memory associated with an MPT adapter structure.
void MptDisplayIocCapabilities(MPT_ADAPTER * ioc)

Display IOC’s capabilities.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter structure
int MakeIocReady(MPT_ADAPTER * ioc, int force, int sleepFlag)

Get IOC to a READY state, using KickStart if needed.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int force
Force hard KickStart of IOC
int sleepFlag
Specifies whether the process can sleep

Return

1 - DIAG reset and READY
0 - READY initially OR soft reset and READY
-1 - Any failure on KickStart
-2 - Msg Unit Reset failed
-3 - IO Unit Reset failed
-4 - IOC owned by a PEER

int GetIocFacts(MPT_ADAPTER * ioc, int sleepFlag, int reason)

Send IOCFacts request to MPT adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int sleepFlag
Specifies whether the process can sleep
int reason
If recovery, only update facts.

Description

Returns 0 for success, non-zero for failure.
int GetPortFacts(MPT_ADAPTER * ioc, int portnum, int sleepFlag)

Send PortFacts request to MPT adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int portnum
Port number
int sleepFlag
Specifies whether the process can sleep

Description

Returns 0 for success, non-zero for failure.
int SendIocInit(MPT_ADAPTER * ioc, int sleepFlag)

Send IOCInit request to MPT adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int sleepFlag
Specifies whether the process can sleep

Description

Send IOCInit followed by PortEnable to bring IOC to OPERATIONAL state.

Returns 0 for success, non-zero for failure.

int SendPortEnable(MPT_ADAPTER * ioc, int portnum, int sleepFlag)

Send PortEnable request to MPT adapter port.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int portnum
Port number to enable
int sleepFlag
Specifies whether the process can sleep

Description

Send PortEnable to bring IOC to OPERATIONAL state.

Returns 0 for success, non-zero for failure.

int mpt_do_upload(MPT_ADAPTER * ioc, int sleepFlag)

Construct and Send FWUpload request to MPT adapter port.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int sleepFlag
Specifies whether the process can sleep

Description

Returns 0 for success, >0 for handshake failure, <0 for fw upload failure.

Remark: If this is a bound IOC and a successful FWUpload was already performed on the bound IOC, the second image is discarded and its memory is freed. Both channels must upload to prevent the IOC from running in degraded mode.

int mpt_downloadboot(MPT_ADAPTER * ioc, MpiFwHeader_t * pFwHeader, int sleepFlag)

DownloadBoot code

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
MpiFwHeader_t * pFwHeader
Pointer to firmware header info
int sleepFlag
Specifies whether the process can sleep

Description

FwDownloadBoot requires Programmed IO access.

Returns 0 for success
-1 if FW image size is 0
-2 if no valid cached_fw pointer
<0 for fw upload failure.
int KickStart(MPT_ADAPTER * ioc, int force, int sleepFlag)

Perform hard reset of MPT adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int force
Force hard reset
int sleepFlag
Specifies whether the process can sleep

Description

This routine places MPT adapter in diagnostic mode via the WriteSequence register, and then performs a hard reset of adapter via the Diagnostic register.

Inputs:
sleepflag - CAN_SLEEP (non-interrupt thread) or NO_SLEEP (interrupt thread, use mdelay)
force - 1 if doorbell active, board in fault state, board operational, or IOC_RECOVERY/IOC_BRINGUP with an alt_ioc; 0 otherwise

Return

1 - hard reset, READY
0 - no reset due to History bit, READY
-1 - no reset due to History bit but not READY, OR reset but failed to come READY
-2 - no reset, could not enter DIAG mode
-3 - reset but bad FW bit

int mpt_diag_reset(MPT_ADAPTER * ioc, int ignore, int sleepFlag)

Perform hard reset of the adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int ignore
Set to ignore the reset history bit, clear to honor it
int sleepFlag
CAN_SLEEP if called in a non-interrupt thread, else set to NO_SLEEP (use mdelay instead)

Description

This routine places the adapter in diagnostic mode via the WriteSequence register and then performs a hard reset of adapter via the Diagnostic register. Adapter should be in ready state upon successful completion.

Return

1 hard reset successful
0 no reset performed because reset history bit set
-2 enabling diagnostic mode failed
-3 diagnostic reset failed
int SendIocReset(MPT_ADAPTER * ioc, u8 reset_type, int sleepFlag)

Send IOCReset request to MPT adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
u8 reset_type
reset type, expected values are MPI_FUNCTION_IOC_MESSAGE_UNIT_RESET or MPI_FUNCTION_IO_UNIT_RESET
int sleepFlag
Specifies whether the process can sleep

Description

Send IOCReset request to the MPT adapter.

Returns 0 for success, non-zero for failure.

int initChainBuffers(MPT_ADAPTER * ioc)

Allocate memory for and initialize chain buffers

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure

Description

Allocates memory for and initializes chain buffers, chain buffer control arrays and spinlock.
int PrimeIocFifos(MPT_ADAPTER * ioc)

Initialize IOC request and reply FIFOs.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure

Description

This routine allocates memory for the MPT reply and request frame pools (if necessary), and primes the IOC reply FIFO with reply frames.

Returns 0 for success, non-zero for failure.

int mpt_handshake_req_reply_wait(MPT_ADAPTER * ioc, int reqBytes, u32 * req, int replyBytes, u16 * u16reply, int maxwait, int sleepFlag)

Send MPT request to and receive reply from IOC via doorbell handshake method.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int reqBytes
Size of the request in bytes
u32 * req
Pointer to MPT request frame
int replyBytes
Expected size of the reply in bytes
u16 * u16reply
Pointer to area where reply should be written
int maxwait
Max wait time for a reply (in seconds)
int sleepFlag
Specifies whether the process can sleep

NOTES

It is the caller’s responsibility to byte-swap fields in the

request which are greater than 1 byte in size. It is also the caller’s responsibility to byte-swap response fields which are greater than 1 byte in size.

Returns 0 for success, non-zero for failure.

int WaitForDoorbellAck(MPT_ADAPTER * ioc, int howlong, int sleepFlag)

Wait for IOC doorbell handshake acknowledge

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int howlong
How long to wait (in seconds)
int sleepFlag
Specifies whether the process can sleep

Description

This routine waits (up to ~2 seconds max) for IOC doorbell handshake ACKnowledge, indicated by the IOP_DOORBELL_STATUS bit in its IntStatus register being clear.

Returns a negative value on failure, else wait loop count.
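
The wait pattern described above (poll a status bit, delaying between polls; return the loop count on success or a negative value on timeout) can be sketched in userspace C. `read_status()`, the bit mask, and the fake-device behavior are stand-ins for reading the adapter's IntStatus register, not the driver's actual code:

```c
/* Userspace sketch of the doorbell-ACK wait loop: poll until the
 * handshake bit clears. All names here are illustrative. */
#define HANDSHAKE_BIT 0x80000000u

static unsigned int fake_status = HANDSHAKE_BIT;
static int polls_until_clear = 3;	/* fake device ACKs on the 3rd poll */

static unsigned int read_status(void)
{
	if (--polls_until_clear <= 0)
		fake_status &= ~HANDSHAKE_BIT;	/* device eventually ACKs */
	return fake_status;
}

static int wait_for_doorbell_ack(int max_polls)
{
	for (int count = 0; count < max_polls; count++) {
		if (!(read_status() & HANDSHAKE_BIT))
			return count;	/* ACKed: return wait loop count */
		/* real code: udelay() or msleep() here per sleepFlag */
	}
	return -1;			/* timed out */
}
```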

int WaitForDoorbellInt(MPT_ADAPTER * ioc, int howlong, int sleepFlag)

Wait for IOC to set its doorbell interrupt bit

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int howlong
How long to wait (in seconds)
int sleepFlag
Specifies whether the process can sleep

Description

This routine waits (up to ~2 seconds max) for IOC doorbell interrupt (MPI_HIS_DOORBELL_INTERRUPT) to be set in the IntStatus register.

Returns a negative value on failure, else wait loop count.

int WaitForDoorbellReply(MPT_ADAPTER * ioc, int howlong, int sleepFlag)

Wait for and capture an IOC handshake reply.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int howlong
How long to wait (in seconds)
int sleepFlag
Specifies whether the process can sleep

Description

This routine polls the IOC for a handshake reply, 16 bits at a time. Reply is cached to IOC private area large enough to hold a maximum of 128 bytes of reply data.

Returns a negative value on failure, else size of reply in WORDS.

int GetLanConfigPages(MPT_ADAPTER * ioc)

Fetch LANConfig pages.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure

Return

0 for success
-ENOMEM if no memory available
-EPERM if not allowed due to ISR context
-EAGAIN if no msg frames currently available
-EFAULT for non-successful reply or no reply (timeout)
int GetIoUnitPage2(MPT_ADAPTER * ioc)

Retrieve BIOS version and boot order information.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure

Return

0 for success
-ENOMEM if no memory available
-EPERM if not allowed due to ISR context
-EAGAIN if no msg frames currently available
-EFAULT for non-successful reply or no reply (timeout)
int mpt_GetScsiPortSettings(MPT_ADAPTER * ioc, int portnum)

read SCSI Port Page 0 and 2

Parameters

MPT_ADAPTER * ioc
Pointer to an adapter structure
int portnum
IOC port number

Return

-EFAULT if read of config page header fails or if no NVRAM
If read of SCSI Port Page 0 fails:
NVRAM = MPT_HOST_NVRAM_INVALID (0xFFFFFFFF); adapter settings: async, narrow; return 1
If read of SCSI Port Page 2 fails:
Adapter settings valid; NVRAM = MPT_HOST_NVRAM_INVALID (0xFFFFFFFF); return 1
Else:
Both valid; return 0

CHECK - what type of locking mechanisms should be used????

int mpt_readScsiDevicePageHeaders(MPT_ADAPTER * ioc, int portnum)

save version and length of SDP1

Parameters

MPT_ADAPTER * ioc
Pointer to an adapter structure
int portnum
IOC port number

Return

-EFAULT if read of config page header fails, or 0 if success.
void mpt_inactive_raid_list_free(MPT_ADAPTER * ioc)

Clears the inactive RAID volumes linked list.

Parameters

MPT_ADAPTER * ioc
pointer to per adapter structure
void mpt_inactive_raid_volumes(MPT_ADAPTER * ioc, u8 channel, u8 id)

sets up link list of phy_disk_nums for devices belonging in an inactive volume

Parameters

MPT_ADAPTER * ioc
pointer to per adapter structure
u8 channel
volume channel
u8 id
volume target id
int SendEventNotification(MPT_ADAPTER * ioc, u8 EvSwitch, int sleepFlag)

Send EventNotification (on or off) request to adapter

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
u8 EvSwitch
Event switch flags
int sleepFlag
Specifies whether the process can sleep
int SendEventAck(MPT_ADAPTER * ioc, EventNotificationReply_t * evnp)

Send EventAck request to MPT adapter.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
EventNotificationReply_t * evnp
Pointer to original EventNotification request
int mpt_ioc_reset(MPT_ADAPTER * ioc, int reset_phase)

Base cleanup for hard reset

Parameters

MPT_ADAPTER * ioc
Pointer to the adapter structure
int reset_phase
Indicates pre- or post-reset functionality

Description

Remark: Frees resources with internally generated commands.
int procmpt_create(void)

Create MPT_PROCFS_MPTBASEDIR entries.

Parameters

void
no arguments

Description

Returns 0 for success, non-zero for failure.
void procmpt_destroy(void)

Tear down MPT_PROCFS_MPTBASEDIR entries.

Parameters

void
no arguments

Description

Tears down the MPT_PROCFS_MPTBASEDIR entries created by procmpt_create(); this routine returns no value.
int mpt_SoftResetHandler(MPT_ADAPTER * ioc, int sleepFlag)

Issues a less expensive reset

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int sleepFlag
Indicates if sleep or schedule must be called.

Description

Returns 0 for SUCCESS or -1 if FAILED.

Message Unit Reset - instructs the IOC to reset the Reply Post and Free FIFOs. All the message frames on the Reply Free FIFO are discarded, all posted buffers are freed, and event notification is turned off. The IOC does not reply to any outstanding request. This transfers the IOC to the READY state.

int ProcessEventNotification(MPT_ADAPTER * ioc, EventNotificationReply_t * pEventReply, int * evHandlers)

Route EventNotificationReply to all event handlers

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
EventNotificationReply_t * pEventReply
Pointer to EventNotification reply frame
int * evHandlers
Pointer to integer, number of event handlers

Description

Routes a received EventNotificationReply to all currently registered event handlers. Returns sum of event handlers return values.
void mpt_fc_log_info(MPT_ADAPTER * ioc, u32 log_info)

Log information returned from Fibre Channel IOC.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
u32 log_info
U32 LogInfo reply word from the IOC

Description

Refer to lsi/mpi_log_fc.h.
void mpt_spi_log_info(MPT_ADAPTER * ioc, u32 log_info)

Log information returned from SCSI Parallel IOC.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
u32 log_info
U32 LogInfo word from the IOC

Description

Refer to lsi/sp_log.h.
void mpt_sas_log_info(MPT_ADAPTER * ioc, u32 log_info, u8 cb_idx)

Log information returned from SAS IOC.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
u32 log_info
U32 LogInfo reply word from the IOC
u8 cb_idx
callback function’s handle

Description

Refer to lsi/mpi_log_sas.h.
void mpt_iocstatus_info_config(MPT_ADAPTER * ioc, u32 ioc_status, MPT_FRAME_HDR * mf)

IOCSTATUS information for config pages

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
u32 ioc_status
U32 IOCStatus word from IOC
MPT_FRAME_HDR * mf
Pointer to MPT request frame

Description

Refer to lsi/mpi.h.
void mpt_iocstatus_info(MPT_ADAPTER * ioc, u32 ioc_status, MPT_FRAME_HDR * mf)

IOCSTATUS information returned from IOC.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
u32 ioc_status
U32 IOCStatus word from IOC
MPT_FRAME_HDR * mf
Pointer to MPT request frame

Description

Refer to lsi/mpi.h.
int fusion_init(void)

Fusion MPT base driver initialization routine.

Parameters

void
no arguments

Description

Returns 0 for success, non-zero for failure.
void __exit fusion_exit(void)

Perform driver unload cleanup.

Parameters

void
no arguments

Description

This routine frees all resources associated with each MPT adapter and removes all MPT_PROCFS_MPTBASEDIR entries.
const char * mptscsih_info(struct Scsi_Host * SChost)

Return information about MPT adapter

Parameters

struct Scsi_Host * SChost
Pointer to Scsi_Host structure

Description

(linux scsi_host_template.info routine)

Returns pointer to buffer where information was written.

int mptscsih_qcmd(struct scsi_cmnd * SCpnt)

Primary Fusion MPT SCSI initiator IO start routine.

Parameters

struct scsi_cmnd * SCpnt
Pointer to scsi_cmnd structure

Description

(linux scsi_host_template.queuecommand routine) This is the primary SCSI IO start routine. Create a MPI SCSIIORequest from a linux scsi_cmnd request and send it to the IOC.

Returns 0. (rtn value discarded by linux scsi mid-layer)

int mptscsih_IssueTaskMgmt(MPT_SCSI_HOST * hd, u8 type, u8 channel, u8 id, u64 lun, int ctx2abort, ulong timeout)

Generic send Task Management function.

Parameters

MPT_SCSI_HOST * hd
Pointer to MPT_SCSI_HOST structure
u8 type
Task Management type
u8 channel
channel number for task management
u8 id
Logical Target ID for reset (if appropriate)
u64 lun
Logical Unit for reset (if appropriate)
int ctx2abort
Context for the task to be aborted (if appropriate)
ulong timeout
timeout for task management control

Description

Remark: _HardResetHandler can be invoked from an interrupt thread (timer) or a non-interrupt thread. In the former, must not call schedule().

Not all fields are meaningful for all task types.

Returns 0 for SUCCESS, or FAILED.

int mptscsih_abort(struct scsi_cmnd * SCpnt)

Abort linux scsi_cmnd routine, new_eh variant

Parameters

struct scsi_cmnd * SCpnt
Pointer to scsi_cmnd structure, IO to be aborted

Description

(linux scsi_host_template.eh_abort_handler routine)

Returns SUCCESS or FAILED.

int mptscsih_dev_reset(struct scsi_cmnd * SCpnt)

Perform a SCSI TARGET_RESET (new_eh variant)

Parameters

struct scsi_cmnd * SCpnt
Pointer to scsi_cmnd structure, IO which reset is due to

Description

(linux scsi_host_template.eh_dev_reset_handler routine)

Returns SUCCESS or FAILED.

int mptscsih_bus_reset(struct scsi_cmnd * SCpnt)

Perform a SCSI BUS_RESET (new_eh variant)

Parameters

struct scsi_cmnd * SCpnt
Pointer to scsi_cmnd structure, IO which reset is due to

Description

(linux scsi_host_template.eh_bus_reset_handler routine)

Returns SUCCESS or FAILED.

int mptscsih_host_reset(struct scsi_cmnd * SCpnt)

Perform a SCSI host adapter RESET (new_eh variant)

Parameters

struct scsi_cmnd * SCpnt
Pointer to scsi_cmnd structure, IO which reset is due to

Description

(linux scsi_host_template.eh_host_reset_handler routine)

Returns SUCCESS or FAILED.

int mptscsih_taskmgmt_complete(MPT_ADAPTER * ioc, MPT_FRAME_HDR * mf, MPT_FRAME_HDR * mr)

Registered with Fusion MPT base driver

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
MPT_FRAME_HDR * mf
Pointer to SCSI task mgmt request frame
MPT_FRAME_HDR * mr
Pointer to SCSI task mgmt reply frame

Description

This routine is called from mptbase.c::mpt_interrupt() at the completion of any SCSI task management request. This routine is registered with the MPT (base) driver at driver load/init time via the mpt_register() API call.

Returns 1 indicating alloc’d request frame ptr should be freed.

struct scsi_cmnd * mptscsih_get_scsi_lookup(MPT_ADAPTER * ioc, int i)

retrieves scmd entry

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int i
index into the array

Description

Returns the scsi_cmd pointer

void mptscsih_info_scsiio(MPT_ADAPTER * ioc, struct scsi_cmnd * sc, SCSIIOReply_t * pScsiReply)

debug print info on reply frame

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
struct scsi_cmnd * sc
original scsi cmnd pointer
SCSIIOReply_t * pScsiReply
Pointer to MPT reply frame

Description

MPT_DEBUG_REPLY needs to be enabled to obtain this info

Refer to lsi/mpi.h.

struct scsi_cmnd * mptscsih_getclear_scsi_lookup(MPT_ADAPTER * ioc, int i)

retrieves and clears scmd entry from ScsiLookup[] array list

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int i
index into the array

Description

Returns the scsi_cmd pointer

void mptscsih_set_scsi_lookup(MPT_ADAPTER * ioc, int i, struct scsi_cmnd * scmd)

write a scmd entry into the ScsiLookup[] array list

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
int i
index into the array
struct scsi_cmnd * scmd
scsi_cmnd pointer
int SCPNT_TO_LOOKUP_IDX(MPT_ADAPTER * ioc, struct scsi_cmnd * sc)

searches for a given scmd in the ScsiLookup[] array list

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
struct scsi_cmnd * sc
scsi_cmnd pointer
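
The four ScsiLookup[] helpers above share one simple pattern: an array maps request-frame indices to outstanding commands; entries are set on submit, cleared on completion, and searched to translate a command pointer back to its index. A minimal userspace sketch (fixed-size array and names are illustrative; the driver also takes a spinlock around these accesses):

```c
#include <stddef.h>

/* Illustrative stand-in for the per-adapter ScsiLookup[] array. */
#define LOOKUP_SZ 8
static void *ScsiLookup[LOOKUP_SZ];

static void set_scsi_lookup(int i, void *scmd)
{
	ScsiLookup[i] = scmd;		/* record outstanding command */
}

static void *getclear_scsi_lookup(int i)
{
	void *scmd = ScsiLookup[i];
	ScsiLookup[i] = NULL;		/* clear the slot on completion */
	return scmd;
}

static int scpnt_to_lookup_idx(void *sc)
{
	for (int i = 0; i < LOOKUP_SZ; i++)
		if (ScsiLookup[i] == sc)
			return i;
	return -1;			/* not found */
}
```
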
int mptscsih_get_completion_code(MPT_ADAPTER * ioc, MPT_FRAME_HDR * req, MPT_FRAME_HDR * reply)

get completion code from MPT request

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
MPT_FRAME_HDR * req
Pointer to original MPT request frame
MPT_FRAME_HDR * reply
Pointer to MPT reply frame (NULL if TurboReply)
int mptscsih_do_cmd(MPT_SCSI_HOST * hd, INTERNAL_CMD * io)

Do internal command.

Parameters

MPT_SCSI_HOST * hd
MPT_SCSI_HOST pointer
INTERNAL_CMD * io
INTERNAL_CMD pointer.

Description

Issue the specified internally generated command and do command specific cleanup. For bus scan / DV only.

NOTES

If command is Inquiry and status is good,

initialize a target structure, save the data

Remark: Single threaded access only.

Return

< 0 if an illegal command or no resources

0 if good

> 0 if command complete but some type of completion error.

void mptscsih_synchronize_cache(MPT_SCSI_HOST * hd, VirtDevice * vdevice)

Send SYNCHRONIZE_CACHE to all disks.

Parameters

MPT_SCSI_HOST * hd
Pointer to a SCSI HOST structure
VirtDevice * vdevice
virtual target device

Description

Uses the ISR, but with special processing. MUST be single-threaded.
int mptctl_syscall_down(MPT_ADAPTER * ioc, int nonblock)

Down the MPT adapter syscall semaphore.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT adapter
int nonblock
boolean, non-zero if O_NONBLOCK is set

Description

All of the ioctl commands can potentially sleep, which is illegal with a spinlock held, thus we perform mutual exclusion here.

Returns negative errno on error, or zero for success.

void mptspi_setTargetNegoParms(MPT_SCSI_HOST * hd, VirtTarget * target, struct scsi_device * sdev)

Update the target negotiation parameters

Parameters

MPT_SCSI_HOST * hd
Pointer to a SCSI Host Structure
VirtTarget * target
per target private data
struct scsi_device * sdev
SCSI device

Description

Update the target negotiation parameters based on the Inquiry data, adapter capabilities, and NVRAM settings.
int mptspi_writeIOCPage4(MPT_SCSI_HOST * hd, u8 channel, u8 id)

write IOC Page 4

Parameters

MPT_SCSI_HOST * hd
Pointer to a SCSI Host Structure
u8 channel
channel number
u8 id
write IOC Page4 for this ID & Bus

Return

-EAGAIN if unable to obtain a Message Frame
or 0 if success.

Remark: We do not wait for a return, write pages sequentially.

void mptspi_initTarget(MPT_SCSI_HOST * hd, VirtTarget * vtarget, struct scsi_device * sdev)

Target, LUN alloc/free functionality.

Parameters

MPT_SCSI_HOST * hd
Pointer to MPT_SCSI_HOST structure
VirtTarget * vtarget
per target private data
struct scsi_device * sdev
SCSI device

NOTE

It’s only SAFE to call this routine if data points to

sane & valid STANDARD INQUIRY data!

Allocate and initialize memory for this target. Save inquiry data.

int mptspi_is_raid(struct _MPT_SCSI_HOST * hd, u32 id)

Determines whether the target belongs to a RAID volume

Parameters

struct _MPT_SCSI_HOST * hd
Pointer to a SCSI HOST structure
u32 id
target device id

Return

non-zero = true, zero = false
void mptspi_print_write_nego(struct _MPT_SCSI_HOST * hd, struct scsi_target * starget, u32 ii)

negotiation parameters debug info that is being sent

Parameters

struct _MPT_SCSI_HOST * hd
Pointer to a SCSI HOST structure
struct scsi_target * starget
SCSI target
u32 ii
negotiation parameters
void mptspi_print_read_nego(struct _MPT_SCSI_HOST * hd, struct scsi_target * starget, u32 ii)

negotiation parameters debug info that is being read

Parameters

struct _MPT_SCSI_HOST * hd
Pointer to a SCSI HOST structure
struct scsi_target * starget
SCSI target
u32 ii
negotiation parameters
int mptspi_init(void)

Register MPT adapter(s) as SCSI host(s) with SCSI mid-layer.

Parameters

void
no arguments

Description

Returns 0 for success, non-zero for failure.
void __exit mptspi_exit(void)

Unregisters MPT adapter(s)

Parameters

void
no arguments
int mptfc_init(void)

Register MPT adapter(s) as SCSI host(s) with SCSI mid-layer.

Parameters

void
no arguments

Description

Returns 0 for success, non-zero for failure.
void mptfc_remove(struct pci_dev * pdev)

Remove fc infrastructure for devices

Parameters

struct pci_dev * pdev
Pointer to pci_dev structure
void __exit mptfc_exit(void)

Unregisters MPT adapter(s)

Parameters

void
no arguments
int lan_reply(MPT_ADAPTER * ioc, MPT_FRAME_HDR * mf, MPT_FRAME_HDR * reply)

Handle all data sent from the hardware.

Parameters

MPT_ADAPTER * ioc
Pointer to MPT_ADAPTER structure
MPT_FRAME_HDR * mf
Pointer to original MPT request frame (NULL if TurboReply)
MPT_FRAME_HDR * reply
Pointer to MPT reply frame

Description

Returns 1 indicating original alloc’d request frame ptr should be freed, or 0 if it shouldn’t.

Sound Devices

snd_printk(fmt, args...)

printk wrapper

Parameters

fmt
format string
args...
variable arguments

Description

Works like printk() but prints the file and the line of the caller when configured with CONFIG_SND_VERBOSE_PRINTK.

snd_printd(fmt, args...)

debug printk

Parameters

fmt
format string
args...
variable arguments

Description

Works like snd_printk() for debugging purposes. Ignored when CONFIG_SND_DEBUG is not set.

snd_BUG()

give a BUG warning message and stack trace

Parameters

Description

Calls WARN() if CONFIG_SND_DEBUG is set. Ignored when CONFIG_SND_DEBUG is not set.

snd_printd_ratelimit()

Parameters

snd_BUG_ON(cond)

debugging check macro

Parameters

cond
condition to evaluate

Description

Has the same behavior as WARN_ON when CONFIG_SND_DEBUG is set, otherwise just evaluates the conditional and returns the value.
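
A userspace sketch of the two behaviors just described: with debug enabled the condition warns when true and yields its value; without debug it only yields the value. `SND_DEBUG` below is a plain compile-time flag standing in for CONFIG_SND_DEBUG, and the warning text is illustrative:

```c
#include <stdio.h>

#ifdef SND_DEBUG
static int snd_bug_on_impl(int cond, const char *file, int line)
{
	if (cond)
		fprintf(stderr, "BUG warning at %s:%d\n", file, line);
	return cond;
}
#define snd_BUG_ON(cond) snd_bug_on_impl(!!(cond), __FILE__, __LINE__)
#else
#define snd_BUG_ON(cond) (!!(cond))	/* condition is still evaluated */
#endif
```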

snd_printdd(format, args...)

debug printk

Parameters

format
format string
args...
variable arguments

Description

Works like snd_printk() for debugging purposes. Ignored when CONFIG_SND_DEBUG_VERBOSE is not set.

int register_sound_special_device(const struct file_operations * fops, int unit, struct device * dev)

register a special sound node

Parameters

const struct file_operations * fops
File operations for the driver
int unit
Unit number to allocate
struct device * dev
device pointer

Description

Allocate a special sound device by minor number from the sound subsystem.

Return

The allocated number is returned on success. On failure,
a negative error code is returned.
int register_sound_mixer(const struct file_operations * fops, int dev)

register a mixer device

Parameters

const struct file_operations * fops
File operations for the driver
int dev
Unit number to allocate

Description

Allocate a mixer device. Unit is the number of the mixer requested. Pass -1 to request the next free mixer unit.

Return

On success, the allocated number is returned. On failure,
a negative error code is returned.
int register_sound_midi(const struct file_operations * fops, int dev)

register a midi device

Parameters

const struct file_operations * fops
File operations for the driver
int dev
Unit number to allocate

Description

Allocate a midi device. Unit is the number of the midi device requested. Pass -1 to request the next free midi unit.

Return

On success, the allocated number is returned. On failure,
a negative error code is returned.
int register_sound_dsp(const struct file_operations * fops, int dev)

register a DSP device

Parameters

const struct file_operations * fops
File operations for the driver
int dev
Unit number to allocate

Description

Allocate a DSP device. Unit is the number of the DSP requested. Pass -1 to request the next free DSP unit.

This function allocates both the audio and dsp device entries together and will always allocate them as a matching pair, e.g. dsp3/audio3.

Return

On success, the allocated number is returned. On failure,
a negative error code is returned.
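
The "pass -1 to request the next free unit" convention shared by the register_sound_* functions above can be sketched with a small fixed table standing in for the sound minor map (table size, names, and the error value are illustrative):

```c
#include <errno.h>

#define MAX_UNITS 8
static int unit_used[MAX_UNITS];	/* stand-in for the minor table */

static int alloc_unit(int requested)
{
	if (requested == -1) {			/* next free unit */
		for (int i = 0; i < MAX_UNITS; i++)
			if (!unit_used[i]) {
				unit_used[i] = 1;
				return i;
			}
		return -EBUSY;
	}
	if (requested < 0 || requested >= MAX_UNITS || unit_used[requested])
		return -EBUSY;			/* specific unit taken */
	unit_used[requested] = 1;
	return requested;
}
```
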
void unregister_sound_special(int unit)

unregister a special sound device

Parameters

int unit
unit number to allocate

Description

Release a sound device that was allocated with register_sound_special(). The unit passed is the return value from the register function.
void unregister_sound_mixer(int unit)

unregister a mixer

Parameters

int unit
unit number to allocate

Description

Release a sound device that was allocated with register_sound_mixer(). The unit passed is the return value from the register function.
void unregister_sound_midi(int unit)

unregister a midi device

Parameters

int unit
unit number to allocate

Description

Release a sound device that was allocated with register_sound_midi(). The unit passed is the return value from the register function.
void unregister_sound_dsp(int unit)

unregister a DSP device

Parameters

int unit
unit number to allocate

Description

Release a sound device that was allocated with register_sound_dsp(). The unit passed is the return value from the register function.

Both of the allocated units are released together automatically.

int snd_pcm_stream_linked(struct snd_pcm_substream * substream)

Check whether the substream is linked with others

Parameters

struct snd_pcm_substream * substream
substream to check

Description

Returns true if the given substream is being linked with others.

snd_pcm_stream_lock_irqsave(substream, flags)

Lock the PCM stream

Parameters

substream
PCM substream
flags
irq flags

Description

This locks the PCM stream like snd_pcm_stream_lock() but also disables local IRQs (only when nonatomic is false). In the nonatomic case, this is identical to snd_pcm_stream_lock().

snd_pcm_group_for_each_entry(s, substream)

iterate over the linked substreams

Parameters

s
the iterator
substream
the substream

Description

Iterates over all substreams linked to the given substream. When substream isn’t linked with any others, this returns substream itself once.

int snd_pcm_running(struct snd_pcm_substream * substream)

Check whether the substream is in a running state

Parameters

struct snd_pcm_substream * substream
substream to check

Description

Returns true if the given substream is in the state RUNNING, or in the state DRAINING for playback.

ssize_t bytes_to_samples(struct snd_pcm_runtime * runtime, ssize_t size)

Unit conversion of the size from bytes to samples

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
ssize_t size
size in bytes
snd_pcm_sframes_t bytes_to_frames(struct snd_pcm_runtime * runtime, ssize_t size)

Unit conversion of the size from bytes to frames

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
ssize_t size
size in bytes
ssize_t samples_to_bytes(struct snd_pcm_runtime * runtime, ssize_t size)

Unit conversion of the size from samples to bytes

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
ssize_t size
size in samples
ssize_t frames_to_bytes(struct snd_pcm_runtime * runtime, snd_pcm_sframes_t size)

Unit conversion of the size from frames to bytes

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
snd_pcm_sframes_t size
size in frames
int frame_aligned(struct snd_pcm_runtime * runtime, ssize_t bytes)

Check whether the byte size is aligned to frames

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
ssize_t bytes
size in bytes
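
The byte/frame/sample conversion helpers above all reduce to the same arithmetic over the runtime's bit sizes (a frame is one sample per channel). A self-contained sketch, with a stand-in struct that records sample and frame sizes in bits as the ALSA runtime does:

```c
/* Illustrative stand-in for the PCM runtime's size fields. */
struct fake_runtime {
	int sample_bits;	/* bits per sample */
	int frame_bits;		/* bits per frame = sample_bits * channels */
};

static long bytes_to_samples(const struct fake_runtime *rt, long bytes)
{
	return bytes * 8 / rt->sample_bits;
}

static long bytes_to_frames(const struct fake_runtime *rt, long bytes)
{
	return bytes * 8 / rt->frame_bits;
}

static long frames_to_bytes(const struct fake_runtime *rt, long frames)
{
	return frames * rt->frame_bits / 8;
}

static int frame_aligned(const struct fake_runtime *rt, long bytes)
{
	return bytes % (rt->frame_bits / 8) == 0;
}
```

For example, with 16-bit stereo audio a frame is 32 bits (4 bytes), so 64 bytes is 16 frames and 32 samples.
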
size_t snd_pcm_lib_buffer_bytes(struct snd_pcm_substream * substream)

Get the buffer size of the current PCM in bytes

Parameters

struct snd_pcm_substream * substream
PCM substream
size_t snd_pcm_lib_period_bytes(struct snd_pcm_substream * substream)

Get the period size of the current PCM in bytes

Parameters

struct snd_pcm_substream * substream
PCM substream
snd_pcm_uframes_t snd_pcm_playback_avail(struct snd_pcm_runtime * runtime)

Get the available (writable) space for playback

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance

Description

Result is between 0 ... (boundary - 1)

snd_pcm_uframes_t snd_pcm_capture_avail(struct snd_pcm_runtime * runtime)

Get the available (readable) space for capture

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance

Description

Result is between 0 ... (boundary - 1)

snd_pcm_sframes_t snd_pcm_playback_hw_avail(struct snd_pcm_runtime * runtime)

Get the queued space for playback

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
snd_pcm_sframes_t snd_pcm_capture_hw_avail(struct snd_pcm_runtime * runtime)

Get the free space for capture

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
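
The four avail helpers above are ring-buffer arithmetic over the hardware and application pointers, wrapped at the boundary. A userspace sketch mirroring that arithmetic (the struct here is a stand-alone stand-in for the PCM runtime, not ALSA's type):

```c
/* Illustrative stand-in for the PCM runtime's position fields. */
struct fake_rt {
	long hw_ptr;		/* position the hardware has reached */
	long appl_ptr;		/* position the application has reached */
	long buffer_size;	/* ring buffer size in frames */
	long boundary;		/* wrap point, a large multiple of buffer_size */
};

/* Writable space for playback: result is between 0 and boundary - 1. */
static long playback_avail(const struct fake_rt *rt)
{
	long avail = rt->hw_ptr + rt->buffer_size - rt->appl_ptr;
	if (avail < 0)
		avail += rt->boundary;
	else if (avail >= rt->boundary)
		avail -= rt->boundary;
	return avail;
}

/* Readable data for capture: result is between 0 and boundary - 1. */
static long capture_avail(const struct fake_rt *rt)
{
	long avail = rt->hw_ptr - rt->appl_ptr;
	if (avail < 0)
		avail += rt->boundary;
	return avail;
}
```

The hw_avail variants are the complements: buffer_size minus the avail value, i.e. queued playback data or free capture space.
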
int snd_pcm_playback_ready(struct snd_pcm_substream * substream)

check whether the playback buffer is available

Parameters

struct snd_pcm_substream * substream
the pcm substream instance

Description

Checks whether enough free space is available on the playback buffer.

Return

Non-zero if available, or zero if not.

int snd_pcm_capture_ready(struct snd_pcm_substream * substream)

check whether the capture buffer is available

Parameters

struct snd_pcm_substream * substream
the pcm substream instance

Description

Checks whether enough capture data is available on the capture buffer.

Return

Non-zero if available, or zero if not.

int snd_pcm_playback_data(struct snd_pcm_substream * substream)

check whether any data exists on the playback buffer

Parameters

struct snd_pcm_substream * substream
the pcm substream instance

Description

Checks whether any data exists on the playback buffer.

Return

Non-zero if any data exists, or zero if not. If stop_threshold is greater than or equal to boundary, this function always returns non-zero.

int snd_pcm_playback_empty(struct snd_pcm_substream * substream)

check whether the playback buffer is empty

Parameters

struct snd_pcm_substream * substream
the pcm substream instance

Description

Checks whether the playback buffer is empty.

Return

Non-zero if empty, or zero if not.

int snd_pcm_capture_empty(struct snd_pcm_substream * substream)

check whether the capture buffer is empty

Parameters

struct snd_pcm_substream * substream
the pcm substream instance

Description

Checks whether the capture buffer is empty.

Return

Non-zero if empty, or zero if not.

void snd_pcm_trigger_done(struct snd_pcm_substream * substream, struct snd_pcm_substream * master)

Mark the master substream

Parameters

struct snd_pcm_substream * substream
the pcm substream instance
struct snd_pcm_substream * master
the linked master substream

Description

When multiple substreams of the same card are linked and the hardware supports the single-shot operation, the driver calls this in the loop in snd_pcm_group_for_each_entry() to mark the substream as "done". Then most trigger operations are performed only on the given master substream.

The trigger_master mark is cleared at timestamp updates at the end of trigger operations.

unsigned int params_channels(const struct snd_pcm_hw_params * p)

Get the number of channels from the hw params

Parameters

const struct snd_pcm_hw_params * p
hw params
unsigned int params_rate(const struct snd_pcm_hw_params * p)

Get the sample rate from the hw params

Parameters

const struct snd_pcm_hw_params * p
hw params
unsigned int params_period_size(const struct snd_pcm_hw_params * p)

Get the period size (in frames) from the hw params

Parameters

const struct snd_pcm_hw_params * p
hw params
unsigned int params_periods(const struct snd_pcm_hw_params * p)

Get the number of periods from the hw params

Parameters

const struct snd_pcm_hw_params * p
hw params
unsigned int params_buffer_size(const struct snd_pcm_hw_params * p)

Get the buffer size (in frames) from the hw params

Parameters

const struct snd_pcm_hw_params * p
hw params
unsigned int params_buffer_bytes(const struct snd_pcm_hw_params * p)

Get the buffer size (in bytes) from the hw params

Parameters

const struct snd_pcm_hw_params * p
hw params
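These accessors are tied together by simple identities: buffer_size = period_size * periods (all in frames), and buffer_bytes = buffer_size times the frame size in bytes. A hedged arithmetic check (standalone values for illustration, not the real struct snd_pcm_hw_params layout):

```c
#include <assert.h>

/* Illustrative configuration: 16-bit stereo, 4 periods of 1024 frames. */
static const unsigned int channels     = 2;
static const unsigned int sample_bytes = 2;    /* S16 format */
static const unsigned int period_size  = 1024; /* frames per period */
static const unsigned int periods      = 4;

/* params_buffer_size() analogue: total frames = periods * period size */
static unsigned int buffer_size_frames(void)
{
    return period_size * periods;
}

/* params_buffer_bytes() analogue: frames * bytes per frame */
static unsigned int buffer_bytes(void)
{
    return buffer_size_frames() * channels * sample_bytes;
}
```

Here the buffer holds 4096 frames, i.e. 16384 bytes.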
int snd_pcm_hw_constraint_single(struct snd_pcm_runtime * runtime, snd_pcm_hw_param_t var, unsigned int val)

Constrain parameter to a single value

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
snd_pcm_hw_param_t var
The hw_params variable to constrain
unsigned int val
The value to constrain to

Return

Positive if the value is changed, zero if it’s not changed, or a negative error code.

int snd_pcm_format_cpu_endian(snd_pcm_format_t format)

Check the PCM format is CPU-endian

Parameters

snd_pcm_format_t format
the format to check

Return

1 if the given PCM format is CPU-endian, 0 if it is the opposite endianness, or a negative error code if the endianness is not specified.
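Whether an explicitly little- or big-endian format counts as CPU-endian depends on the host byte order. A standalone sketch of the underlying host-endianness check (hypothetical helper names, not the kernel implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Returns 1 on a little-endian host, 0 on a big-endian one. */
static int host_is_little_endian(void)
{
    const uint16_t probe = 1;
    /* On a little-endian host the low-order byte comes first in memory. */
    return *(const uint8_t *)&probe == 1;
}

/* An S16_LE-style format is CPU-endian exactly on little-endian hosts. */
static int le_format_is_cpu_endian(void)
{
    return host_is_little_endian();
}
```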

void snd_pcm_set_runtime_buffer(struct snd_pcm_substream * substream, struct snd_dma_buffer * bufp)

Set the PCM runtime buffer

Parameters

struct snd_pcm_substream * substream
PCM substream to set
struct snd_dma_buffer * bufp
the buffer information, NULL to clear

Description

Copy the buffer information to runtime->dma_buffer when bufp is non-NULL. Otherwise it clears the current buffer information.

void snd_pcm_gettime(struct snd_pcm_runtime * runtime, struct timespec * tv)

Fill the timespec depending on the timestamp mode

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
struct timespec * tv
timespec to fill
int snd_pcm_lib_alloc_vmalloc_buffer(struct snd_pcm_substream * substream, size_t size)

allocate virtual DMA buffer

Parameters

struct snd_pcm_substream * substream
the substream to allocate the buffer to
size_t size
the requested buffer size, in bytes

Description

Allocates the PCM substream buffer using vmalloc(), i.e., the memory is contiguous in kernel virtual space, but not in physical memory. Use this if the buffer is accessed by kernel code but not by device DMA.

Return

1 if the buffer was changed, 0 if not changed, or a negative error code.

int snd_pcm_lib_alloc_vmalloc_32_buffer(struct snd_pcm_substream * substream, size_t size)

allocate 32-bit-addressable buffer

Parameters

struct snd_pcm_substream * substream
the substream to allocate the buffer to
size_t size
the requested buffer size, in bytes

Description

This function works like snd_pcm_lib_alloc_vmalloc_buffer(), but uses vmalloc_32(), i.e., the pages are allocated from 32-bit-addressable memory.

Return

1 if the buffer was changed, 0 if not changed, or a negative error code.

dma_addr_t snd_pcm_sgbuf_get_addr(struct snd_pcm_substream * substream, unsigned int ofs)

Get the DMA address at the corresponding offset

Parameters

struct snd_pcm_substream * substream
PCM substream
unsigned int ofs
byte offset
void * snd_pcm_sgbuf_get_ptr(struct snd_pcm_substream * substream, unsigned int ofs)

Get the virtual address at the corresponding offset

Parameters

struct snd_pcm_substream * substream
PCM substream
unsigned int ofs
byte offset
unsigned int snd_pcm_sgbuf_get_chunk_size(struct snd_pcm_substream * substream, unsigned int ofs, unsigned int size)

Compute the max size that fits within the contiguous page from the given size

Parameters

struct snd_pcm_substream * substream
PCM substream
unsigned int ofs
byte offset
unsigned int size
byte size to examine
void snd_pcm_mmap_data_open(struct vm_area_struct * area)

increase the mmap counter

Parameters

struct vm_area_struct * area
VMA

Description

PCM mmap callback should handle this counter properly

void snd_pcm_mmap_data_close(struct vm_area_struct * area)

decrease the mmap counter

Parameters

struct vm_area_struct * area
VMA

Description

PCM mmap callback should handle this counter properly

void snd_pcm_limit_isa_dma_size(int dma, size_t * max)

Get the max size that fits within an ISA DMA transfer

Parameters

int dma
DMA number
size_t * max
pointer to store the max size
const char * snd_pcm_stream_str(struct snd_pcm_substream * substream)

Get a string naming the direction of a stream

Parameters

struct snd_pcm_substream * substream
the pcm substream instance

Return

A string naming the direction of the stream.

struct snd_pcm_substream * snd_pcm_chmap_substream(struct snd_pcm_chmap * info, unsigned int idx)

get the PCM substream assigned to the given chmap info

Parameters

struct snd_pcm_chmap * info
chmap information
unsigned int idx
the substream number index
u64 pcm_format_to_bits(snd_pcm_format_t pcm_format)

Strong-typed conversion of pcm_format to bitwise

Parameters

snd_pcm_format_t pcm_format
PCM format
const char * snd_pcm_format_name(snd_pcm_format_t format)

Return a name string for the given PCM format

Parameters

snd_pcm_format_t format
PCM format
int snd_pcm_new_stream(struct snd_pcm * pcm, int stream, int substream_count)

create a new PCM stream

Parameters

struct snd_pcm * pcm
the pcm instance
int stream
the stream direction, SNDRV_PCM_STREAM_XXX
int substream_count
the number of substreams

Description

Creates a new stream for the pcm. The corresponding stream on the pcm must have been empty before calling this, i.e., zero must have been given as the corresponding substream count argument of snd_pcm_new().

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_new(struct snd_card * card, const char * id, int device, int playback_count, int capture_count, struct snd_pcm ** rpcm)

create a new PCM instance

Parameters

struct snd_card * card
the card instance
const char * id
the id string
int device
the device index (zero based)
int playback_count
the number of substreams for playback
int capture_count
the number of substreams for capture
struct snd_pcm ** rpcm
the pointer to store the new pcm instance

Description

Creates a new PCM instance.

The pcm operators have to be set afterwards to the new instance via snd_pcm_set_ops().

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_new_internal(struct snd_card * card, const char * id, int device, int playback_count, int capture_count, struct snd_pcm ** rpcm)

create a new internal PCM instance

Parameters

struct snd_card * card
the card instance
const char * id
the id string
int device
the device index (zero based - shared with normal PCMs)
int playback_count
the number of substreams for playback
int capture_count
the number of substreams for capture
struct snd_pcm ** rpcm
the pointer to store the new pcm instance

Description

Creates a new internal PCM instance with no userspace device or procfs entries. This is used by ASoC Back End PCMs to create a PCM that will only be used internally by kernel drivers, i.e., it cannot be opened from userspace. It provides existing ASoC component drivers with a substream and access to any private data.

The pcm operators have to be set afterwards to the new instance via snd_pcm_set_ops().

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_notify(struct snd_pcm_notify * notify, int nfree)

Add/remove the notify list

Parameters

struct snd_pcm_notify * notify
PCM notify list
int nfree
0 = register, 1 = unregister

Description

This adds the given notifier to the global list so that the callback is called for each registered PCM device. This exists only for PCM OSS emulation, so far.

int snd_device_new(struct snd_card * card, enum snd_device_type type, void * device_data, struct snd_device_ops * ops)

create an ALSA device component

Parameters

struct snd_card * card
the card instance
enum snd_device_type type
the device type, SNDRV_DEV_XXX
void * device_data
the data pointer of this device
struct snd_device_ops * ops
the operator table

Description

Creates a new device component for the given data pointer. The device will be assigned to the card and managed together by the card.

The data pointer also serves as the identifier, so the pointer address must be unique and unchanged.

Return

Zero if successful, or a negative error code on failure.

void snd_device_disconnect(struct snd_card * card, void * device_data)

disconnect the device

Parameters

struct snd_card * card
the card instance
void * device_data
the data pointer to disconnect

Description

Puts the device into the disconnected state, invoking the dev_disconnect callback if the device was already registered.

Usually called from snd_card_disconnect().

Return

Zero if successful, or a negative error code on failure or if the device was not found.

void snd_device_free(struct snd_card * card, void * device_data)

release the device from the card

Parameters

struct snd_card * card
the card instance
void * device_data
the data pointer to release

Description

Removes the device from the list on the card and invokes the callbacks, dev_disconnect and dev_free, corresponding to the state. Then it releases the device.

int snd_device_register(struct snd_card * card, void * device_data)

register the device

Parameters

struct snd_card * card
the card instance
void * device_data
the data pointer to register

Description

Registers the device which was already created via snd_device_new(). Usually this is called from snd_card_register(), but it can be called later if any new devices are created after invocation of snd_card_register().

Return

Zero if successful, or a negative error code on failure or if the device was not found.

int snd_info_get_line(struct snd_info_buffer * buffer, char * line, int len)

read one line from the procfs buffer

Parameters

struct snd_info_buffer * buffer
the procfs buffer
char * line
the buffer to store
int len
the max. buffer size

Description

Reads one line from the buffer and stores the string.

Return

Zero if successful, or 1 if error or EOF.

const char * snd_info_get_str(char * dest, const char * src, int len)

parse a string token

Parameters

char * dest
the buffer to store the string token
const char * src
the original string
int len
the max. length of token - 1

Description

Parses the original string and copies a token into the given string buffer.

Return

The updated pointer of the original string so that it can be used for the next call.
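The contract — copy one whitespace-delimited token (honoring the len - 1 limit) and return an updated source pointer for the next call — can be sketched in plain C. This is a simplified reimplementation for illustration; the kernel function additionally handles quoted tokens:

```c
#include <assert.h>
#include <string.h>

/* Copy one whitespace-delimited token from src into dest (at most
 * len - 1 characters plus the terminator) and return the position
 * where the next call should resume. Simplified: no quote handling. */
static const char *my_get_str(char *dest, const char *src, int len)
{
    int c = 0;

    while (*src == ' ' || *src == '\t')   /* skip leading blanks */
        src++;
    while (*src && *src != ' ' && *src != '\t' &&
           *src != '\n' && c < len - 1)
        dest[c++] = *src++;
    dest[c] = '\0';
    while (*src == ' ' || *src == '\t')   /* position at next token */
        src++;
    return src;
}
```

Calling it repeatedly with the returned pointer walks through the tokens of a line one by one.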

struct snd_info_entry * snd_info_create_module_entry(struct module * module, const char * name, struct snd_info_entry * parent)

create an info entry for the given module

Parameters

struct module * module
the module pointer
const char * name
the file name
struct snd_info_entry * parent
the parent directory

Description

Creates a new info entry and assigns it to the given module.

Return

The pointer of the new instance, or NULL on failure.

struct snd_info_entry * snd_info_create_card_entry(struct snd_card * card, const char * name, struct snd_info_entry * parent)

create an info entry for the given card

Parameters

struct snd_card * card
the card instance
const char * name
the file name
struct snd_info_entry * parent
the parent directory

Description

Creates a new info entry and assigns it to the given card.

Return

The pointer of the new instance, or NULL on failure.

void snd_info_free_entry(struct snd_info_entry * entry)

release the info entry

Parameters

struct snd_info_entry * entry
the info entry

Description

Releases the info entry.

int snd_info_register(struct snd_info_entry * entry)

register the info entry

Parameters

struct snd_info_entry * entry
the info entry

Description

Registers the proc info entry.

Return

Zero if successful, or a negative error code on failure.

int snd_rawmidi_receive(struct snd_rawmidi_substream * substream, const unsigned char * buffer, int count)

receive the input data from the device

Parameters

struct snd_rawmidi_substream * substream
the rawmidi substream
const unsigned char * buffer
the buffer pointer
int count
the data size to read

Description

Reads the data from the internal buffer.

Return

The size of read data, or a negative error code on failure.

int snd_rawmidi_transmit_empty(struct snd_rawmidi_substream * substream)

check whether the output buffer is empty

Parameters

struct snd_rawmidi_substream * substream
the rawmidi substream

Return

1 if the internal output buffer is empty, 0 if not.

int __snd_rawmidi_transmit_peek(struct snd_rawmidi_substream * substream, unsigned char * buffer, int count)

copy data from the internal buffer

Parameters

struct snd_rawmidi_substream * substream
the rawmidi substream
unsigned char * buffer
the buffer pointer
int count
data size to transfer

Description

This is a variant of snd_rawmidi_transmit_peek() without spinlock.

int snd_rawmidi_transmit_peek(struct snd_rawmidi_substream * substream, unsigned char * buffer, int count)

copy data from the internal buffer

Parameters

struct snd_rawmidi_substream * substream
the rawmidi substream
unsigned char * buffer
the buffer pointer
int count
data size to transfer

Description

Copies data from the internal output buffer to the given buffer.

Call this in the interrupt handler when the midi output is ready, and call snd_rawmidi_transmit_ack() after the transmission is finished.

Return

The size of copied data, or a negative error code on failure.

int __snd_rawmidi_transmit_ack(struct snd_rawmidi_substream * substream, int count)

acknowledge the transmission

Parameters

struct snd_rawmidi_substream * substream
the rawmidi substream
int count
the transferred count

Description

This is a variant of snd_rawmidi_transmit_ack() without spinlock.

int snd_rawmidi_transmit_ack(struct snd_rawmidi_substream * substream, int count)

acknowledge the transmission

Parameters

struct snd_rawmidi_substream * substream
the rawmidi substream
int count
the transferred count

Description

Advances the hardware pointer for the internal output buffer with the given size and updates the condition. Call after the transmission is finished.

Return

The advanced size if successful, or a negative error code on failure.

int snd_rawmidi_transmit(struct snd_rawmidi_substream * substream, unsigned char * buffer, int count)

copy from the buffer to the device

Parameters

struct snd_rawmidi_substream * substream
the rawmidi substream
unsigned char * buffer
the buffer pointer
int count
the data size to transfer

Description

Copies data from the buffer to the device and advances the pointer.

Return

The copied size if successful, or a negative error code on failure.
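The peek/ack pair above implements two-phase consumption of the output ring buffer: peek copies data without advancing the pointer, and ack advances it only after the hardware has confirmed the transfer, so nothing is lost when the device accepts fewer bytes than offered. A standalone sketch of the same pattern (plain C with made-up structure names, not the kernel's rawmidi internals):

```c
#include <assert.h>
#include <string.h>

/* Minimal output ring buffer with two-phase (peek/ack) consumption. */
struct midi_ring {
    unsigned char data[64];
    int head;  /* index the next byte is read from */
    int used;  /* bytes currently queued */
};

/* Copy up to count bytes without consuming them (like transmit_peek). */
static int ring_peek(const struct midi_ring *r, unsigned char *buf, int count)
{
    int n = count < r->used ? count : r->used;

    for (int i = 0; i < n; i++)
        buf[i] = r->data[(r->head + i) % 64];
    return n;
}

/* Consume count bytes after the device took them (like transmit_ack). */
static int ring_ack(struct midi_ring *r, int count)
{
    if (count > r->used)
        count = r->used;
    r->head = (r->head + count) % 64;
    r->used -= count;
    return count;
}
```

A single-phase transmit, like snd_rawmidi_transmit(), is then just a peek immediately followed by an ack of the copied size.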

int snd_rawmidi_new(struct snd_card * card, char * id, int device, int output_count, int input_count, struct snd_rawmidi ** rrawmidi)

create a rawmidi instance

Parameters

struct snd_card * card
the card instance
char * id
the id string
int device
the device index
int output_count
the number of output streams
int input_count
the number of input streams
struct snd_rawmidi ** rrawmidi
the pointer to store the new rawmidi instance

Description

Creates a new rawmidi instance. Use snd_rawmidi_set_ops() to set the operators to the new instance.

Return

Zero if successful, or a negative error code on failure.

void snd_rawmidi_set_ops(struct snd_rawmidi * rmidi, int stream, struct snd_rawmidi_ops * ops)

set the rawmidi operators

Parameters

struct snd_rawmidi * rmidi
the rawmidi instance
int stream
the stream direction, SNDRV_RAWMIDI_STREAM_XXX
struct snd_rawmidi_ops * ops
the operator table

Description

Sets the rawmidi operators for the given stream direction.

void snd_request_card(int card)

try to load the card module

Parameters

int card
the card number

Description

Tries to load the module “snd-card-X” for the given card number via request_module. Returns immediately if already loaded.

void * snd_lookup_minor_data(unsigned int minor, int type)

get user data of a registered device

Parameters

unsigned int minor
the minor number
int type
device type (SNDRV_DEVICE_TYPE_XXX)

Description

Checks that a minor device with the specified type is registered, and returns its user data pointer.

This function increments the reference counter of the card instance if an associated instance with the given minor number and type is found. The caller must call snd_card_unref() appropriately later.

Return

The user data pointer if the specified device is found. NULL otherwise.

int snd_register_device(int type, struct snd_card * card, int dev, const struct file_operations * f_ops, void * private_data, struct device * device)

Register the ALSA device file for the card

Parameters

int type
the device type, SNDRV_DEVICE_TYPE_XXX
struct snd_card * card
the card instance
int dev
the device index
const struct file_operations * f_ops
the file operations
void * private_data
user pointer for f_ops->open()
struct device * device
the device to register

Description

Registers an ALSA device file for the given card. The file operations have to be given in the f_ops parameter.

Return

Zero if successful, or a negative error code on failure.

int snd_unregister_device(struct device * dev)

unregister the device on the given card

Parameters

struct device * dev
the device instance

Description

Unregisters the device file already registered via snd_register_device().

Return

Zero if successful, or a negative error code on failure.

int copy_to_user_fromio(void __user * dst, const volatile void __iomem * src, size_t count)

copy data from mmio-space to user-space

Parameters

void __user * dst
the destination pointer on user-space
const volatile void __iomem * src
the source pointer on mmio
size_t count
the data size to copy in bytes

Description

Copies the data from mmio-space to user-space.

Return

Zero if successful, or non-zero on failure.

int copy_from_user_toio(volatile void __iomem * dst, const void __user * src, size_t count)

copy data from user-space to mmio-space

Parameters

volatile void __iomem * dst
the destination pointer on mmio-space
const void __user * src
the source pointer on user-space
size_t count
the data size to copy in bytes

Description

Copies the data from user-space to mmio-space.

Return

Zero if successful, or non-zero on failure.

int snd_pcm_lib_preallocate_free_for_all(struct snd_pcm * pcm)

release all pre-allocated buffers on the pcm

Parameters

struct snd_pcm * pcm
the pcm instance

Description

Releases all the pre-allocated buffers on the given pcm.

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_lib_preallocate_pages(struct snd_pcm_substream * substream, int type, struct device * data, size_t size, size_t max)

pre-allocation for the given DMA type

Parameters

struct snd_pcm_substream * substream
the pcm substream instance
int type
DMA type (SNDRV_DMA_TYPE_*)
struct device * data
DMA type dependent data
size_t size
the requested pre-allocation size in bytes
size_t max
the max. allowed pre-allocation size

Description

Do pre-allocation for the given DMA buffer type.

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_lib_preallocate_pages_for_all(struct snd_pcm * pcm, int type, void * data, size_t size, size_t max)

pre-allocation for continuous memory type (all substreams)

Parameters

struct snd_pcm * pcm
the pcm instance
int type
DMA type (SNDRV_DMA_TYPE_*)
void * data
DMA type dependent data
size_t size
the requested pre-allocation size in bytes
size_t max
the max. allowed pre-allocation size

Description

Do pre-allocation for all substreams of the given pcm for the specified DMA type.

Return

Zero if successful, or a negative error code on failure.

struct page * snd_pcm_sgbuf_ops_page(struct snd_pcm_substream * substream, unsigned long offset)

get the page struct at the given offset

Parameters

struct snd_pcm_substream * substream
the pcm substream instance
unsigned long offset
the buffer offset

Description

Used as the page callback of PCM ops.

Return

The page struct at the given buffer offset. NULL on failure.

int snd_pcm_lib_malloc_pages(struct snd_pcm_substream * substream, size_t size)

allocate the DMA buffer

Parameters

struct snd_pcm_substream * substream
the substream to allocate the DMA buffer to
size_t size
the requested buffer size in bytes

Description

Allocates the DMA buffer on the bus type given earlier to snd_pcm_lib_preallocate_xxx_pages().

Return

1 if the buffer is changed, 0 if not changed, or a negative code on failure.

int snd_pcm_lib_free_pages(struct snd_pcm_substream * substream)

release the allocated DMA buffer.

Parameters

struct snd_pcm_substream * substream
the substream to release the DMA buffer

Description

Releases the DMA buffer allocated via snd_pcm_lib_malloc_pages().

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_lib_free_vmalloc_buffer(struct snd_pcm_substream * substream)

free vmalloc buffer

Parameters

struct snd_pcm_substream * substream
the substream with a buffer allocated by snd_pcm_lib_alloc_vmalloc_buffer()

Return

Zero if successful, or a negative error code on failure.

struct page * snd_pcm_lib_get_vmalloc_page(struct snd_pcm_substream * substream, unsigned long offset)

map vmalloc buffer offset to page struct

Parameters

struct snd_pcm_substream * substream
the substream with a buffer allocated by snd_pcm_lib_alloc_vmalloc_buffer()
unsigned long offset
offset in the buffer

Description

This function is to be used as the page callback in the PCM ops.

Return

The page struct, or NULL on failure.

void snd_device_initialize(struct device * dev, struct snd_card * card)

Initialize struct device for sound devices

Parameters

struct device * dev
device to initialize
struct snd_card * card
card to assign, optional
int snd_card_new(struct device * parent, int idx, const char * xid, struct module * module, int extra_size, struct snd_card ** card_ret)

create and initialize a soundcard structure

Parameters

struct device * parent
the parent device object
int idx
card index (address) [0 ... (SNDRV_CARDS-1)]
const char * xid
card identification (ASCII string)
struct module * module
top level module for locking
int extra_size
allocate this extra size after the main soundcard structure
struct snd_card ** card_ret
the pointer to store the created card instance

Description

Creates and initializes a soundcard structure.

The function allocates a snd_card instance via kzalloc with the given extra space for the driver to use freely. The allocated struct is stored in the given card_ret pointer.

Return

Zero if successful or a negative error code.

int snd_card_disconnect(struct snd_card * card)

disconnect all APIs from the file-operations (user space)

Parameters

struct snd_card * card
soundcard structure

Description

Disconnects all APIs from the file-operations (user space).

Return

Zero, otherwise a negative error code.

Note

The current implementation replaces all active file->f_op with special
dummy file operations (they do nothing except release).
int snd_card_free_when_closed(struct snd_card * card)

Disconnect the card, free it later eventually

Parameters

struct snd_card * card
soundcard structure

Description

Unlike snd_card_free(), this function doesn't try to release the card resources immediately; instead it disconnects the card first. When the card is still in use, the function returns before freeing the resources. The card resources will be freed when the refcount reaches zero.

int snd_card_free(struct snd_card * card)

frees given soundcard structure

Parameters

struct snd_card * card
soundcard structure

Description

This function releases the soundcard structure and all devices assigned to it automatically. That is, you don't have to release the devices by yourself.

This function waits until all resources are properly released.

Return

Zero. Frees all associated devices and frees the control interface associated with the given soundcard.

void snd_card_set_id(struct snd_card * card, const char * nid)

set card identification name

Parameters

struct snd_card * card
soundcard structure
const char * nid
new identification string

Description

This function sets the card identification and checks for name collisions.
int snd_card_add_dev_attr(struct snd_card * card, const struct attribute_group * group)

Append a new sysfs attribute group to card

Parameters

struct snd_card * card
card instance
const struct attribute_group * group
attribute group to append
int snd_card_register(struct snd_card * card)

register the soundcard

Parameters

struct snd_card * card
soundcard structure

Description

This function registers all the devices assigned to the soundcard. Until this is called, the ALSA control interface is blocked from external access. Thus, you should call this function at the end of the initialization of the card.

Return

Zero if successful, or a negative error code if the registration failed.

int snd_component_add(struct snd_card * card, const char * component)

add a component string

Parameters

struct snd_card * card
soundcard structure
const char * component
the component id string

Description

This function adds the component id string to the supported list. The component can then be referenced from alsa-lib.

Return

Zero if successful, otherwise a negative error code.

int snd_card_file_add(struct snd_card * card, struct file * file)

add the file to the file list of the card

Parameters

struct snd_card * card
soundcard structure
struct file * file
file pointer

Description

This function adds the file to the file linked list of the card. This linked list is used to keep track of the connection state and to prevent busy resources from being released by hotplug.

Return

zero or a negative error code.

int snd_card_file_remove(struct snd_card * card, struct file * file)

remove the file from the file list

Parameters

struct snd_card * card
soundcard structure
struct file * file
file pointer

Description

This function removes the file formerly added to the card via the snd_card_file_add() function. If all files are removed and snd_card_free_when_closed() was called beforehand, it processes the pending release of resources.

Return

Zero or a negative error code.

int snd_power_wait(struct snd_card * card, unsigned int power_state)

wait until the power-state is changed.

Parameters

struct snd_card * card
soundcard structure
unsigned int power_state
expected power state

Description

Waits until the power-state is changed.

Return

Zero if successful, or a negative error code.

Note

the power lock must be held before this call.

void snd_dma_program(unsigned long dma, unsigned long addr, unsigned int size, unsigned short mode)

program an ISA DMA transfer

Parameters

unsigned long dma
the dma number
unsigned long addr
the physical address of the buffer
unsigned int size
the DMA transfer size
unsigned short mode
the DMA transfer mode, DMA_MODE_XXX

Description

Programs an ISA DMA transfer for the given buffer.

void snd_dma_disable(unsigned long dma)

stop the ISA DMA transfer

Parameters

unsigned long dma
the dma number

Description

Stops the ISA DMA transfer.

unsigned int snd_dma_pointer(unsigned long dma, unsigned int size)

return the current pointer to DMA transfer buffer in bytes

Parameters

unsigned long dma
the dma number
unsigned int size
the dma transfer size

Return

The current pointer in DMA transfer buffer in bytes.

void snd_ctl_notify(struct snd_card * card, unsigned int mask, struct snd_ctl_elem_id * id)

Send notification to user-space for a control change

Parameters

struct snd_card * card
the card to send notification
unsigned int mask
the event mask, SNDRV_CTL_EVENT_*
struct snd_ctl_elem_id * id
the ctl element id to send notification

Description

This function adds an event record with the given id and mask, appends to the list and wakes up the user-space for notification. This can be called in the atomic context.

struct snd_kcontrol * snd_ctl_new1(const struct snd_kcontrol_new * ncontrol, void * private_data)

create a control instance from the template

Parameters

const struct snd_kcontrol_new * ncontrol
the initialization record
void * private_data
the private data to set

Description

Allocates a new struct snd_kcontrol instance and initializes it from the given template. When the access field of ncontrol is 0, READWRITE access is assumed. When the count field is 0, it is assumed to be one.

Return

The pointer of the newly generated instance, or NULL on failure.

void snd_ctl_free_one(struct snd_kcontrol * kcontrol)

release the control instance

Parameters

struct snd_kcontrol * kcontrol
the control instance

Description

Releases the control instance created via snd_ctl_new() or snd_ctl_new1(). Don’t call this after the control was added to the card.

int snd_ctl_add(struct snd_card * card, struct snd_kcontrol * kcontrol)

add the control instance to the card

Parameters

struct snd_card * card
the card instance
struct snd_kcontrol * kcontrol
the control instance to add

Description

Adds the control instance created via snd_ctl_new() or snd_ctl_new1() to the given card. It also assigns a unique numid used for fast search.

A control that cannot be added is freed automatically.

Return

Zero if successful, or a negative error code on failure.

int snd_ctl_replace(struct snd_card * card, struct snd_kcontrol * kcontrol, bool add_on_replace)

replace the control instance of the card

Parameters

struct snd_card * card
the card instance
struct snd_kcontrol * kcontrol
the control instance to replace
bool add_on_replace
add the control if not already added

Description

Replaces the given control. If the given control does not exist and the add_on_replace flag is set, the control is added. If the control exists, it is destroyed first.

The control is freed automatically if it cannot be added or replaced.

Return

Zero if successful, or a negative error code on failure.

int snd_ctl_remove(struct snd_card * card, struct snd_kcontrol * kcontrol)

remove the control from the card and release it

Parameters

struct snd_card * card
the card instance
struct snd_kcontrol * kcontrol
the control instance to remove

Description

Removes the control from the card and then releases the instance. You don't need to call snd_ctl_free_one(). You must hold the write lock - down_write(card->controls_rwsem).

Return

0 if successful, or a negative error code on failure.

int snd_ctl_remove_id(struct snd_card * card, struct snd_ctl_elem_id * id)

remove the control of the given id and release it

Parameters

struct snd_card * card
the card instance
struct snd_ctl_elem_id * id
the control id to remove

Description

Finds the control instance with the given id, removes it from the card list and releases it.

Return

0 if successful, or a negative error code on failure.

int snd_ctl_activate_id(struct snd_card * card, struct snd_ctl_elem_id * id, int active)

activate/inactivate the control of the given id

Parameters

struct snd_card * card
the card instance
struct snd_ctl_elem_id * id
the control id to activate/inactivate
int active
non-zero to activate

Description

Finds the control instance with the given id, and activates or inactivates the control, with notification if the state changed. The given ID data is filled with the full information.

Return

0 if unchanged, 1 if changed, or a negative error code on failure.

int snd_ctl_rename_id(struct snd_card * card, struct snd_ctl_elem_id * src_id, struct snd_ctl_elem_id * dst_id)

replace the id of a control on the card

Parameters

struct snd_card * card
the card instance
struct snd_ctl_elem_id * src_id
the old id
struct snd_ctl_elem_id * dst_id
the new id

Description

Finds the control with the old id from the card, and replaces the id with the new one.

Return

Zero if successful, or a negative error code on failure.

struct snd_kcontrol * snd_ctl_find_numid(struct snd_card * card, unsigned int numid)

find the control instance with the given number-id

Parameters

struct snd_card * card
the card instance
unsigned int numid
the number-id to search

Description

Finds the control instance with the given number-id from the card.

The caller must down card->controls_rwsem before calling this function (if a race condition can occur).

Return

The pointer of the instance if found, or NULL if not.

struct snd_kcontrol * snd_ctl_find_id(struct snd_card * card, struct snd_ctl_elem_id * id)

find the control instance with the given id

Parameters

struct snd_card * card
the card instance
struct snd_ctl_elem_id * id
the id to search

Description

Finds the control instance with the given id from the card.

The caller must down card->controls_rwsem before calling this function (if a race condition can occur).

Return

The pointer of the instance if found, or NULL if not.

int snd_ctl_register_ioctl(snd_kctl_ioctl_func_t fcn)

register the device-specific control-ioctls

Parameters

snd_kctl_ioctl_func_t fcn
ioctl callback function

Description

Called from each device manager such as pcm.c, hwdep.c, etc.

int snd_ctl_register_ioctl_compat(snd_kctl_ioctl_func_t fcn)

register the device-specific 32bit compat control-ioctls

Parameters

snd_kctl_ioctl_func_t fcn
ioctl callback function
int snd_ctl_unregister_ioctl(snd_kctl_ioctl_func_t fcn)

de-register the device-specific control-ioctls

Parameters

snd_kctl_ioctl_func_t fcn
ioctl callback function to unregister
int snd_ctl_unregister_ioctl_compat(snd_kctl_ioctl_func_t fcn)

de-register the device-specific compat 32bit control-ioctls

Parameters

snd_kctl_ioctl_func_t fcn
ioctl callback function to unregister
int snd_ctl_boolean_mono_info(struct snd_kcontrol * kcontrol, struct snd_ctl_elem_info * uinfo)

Helper function for a standard boolean info callback with a mono channel

Parameters

struct snd_kcontrol * kcontrol
the kcontrol instance
struct snd_ctl_elem_info * uinfo
info to store

Description

This is a function that can be used as info callback for a standard boolean control with a single mono channel.

int snd_ctl_boolean_stereo_info(struct snd_kcontrol * kcontrol, struct snd_ctl_elem_info * uinfo)

Helper function for a standard boolean info callback with two stereo channels

Parameters

struct snd_kcontrol * kcontrol
the kcontrol instance
struct snd_ctl_elem_info * uinfo
info to store

Description

This is a function that can be used as info callback for a standard boolean control with two stereo channels.
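A hedged sketch of how the stereo helper slots into a control template; the control name and the driver callbacks are illustrative, not part of the API:

```c
/* Template for a hypothetical two-channel switch control. */
static const struct snd_kcontrol_new mychip_master_switch = {
	.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
	.name  = "Master Playback Switch",
	.info  = snd_ctl_boolean_stereo_info,	/* two boolean channels */
	.get   = mychip_master_switch_get,	/* driver-defined */
	.put   = mychip_master_switch_put,	/* driver-defined */
};
```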

int snd_ctl_enum_info(struct snd_ctl_elem_info * info, unsigned int channels, unsigned int items, const char *const names[])

fills the info structure for an enumerated control

Parameters

struct snd_ctl_elem_info * info
the structure to be filled
unsigned int channels
the number of the control’s channels; often one
unsigned int items
the number of control values; also the size of names
const char *const names[]
an array containing the names of all control values

Description

Sets all required fields in info to their appropriate values. If the control’s accessibility is not the default (readable and writable), the caller has to fill info->access.

Return

Zero.
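With snd_ctl_enum_info() an enumerated control's info callback collapses to a one-liner; the following sketch assumes a hypothetical mono capture-source control with three items:

```c
/* Info callback for a hypothetical capture-source enum control. */
static int mychip_route_info(struct snd_kcontrol *kcontrol,
			     struct snd_ctl_elem_info *uinfo)
{
	static const char *const texts[] = { "Mic", "Line", "CD" };

	/* one channel, ARRAY_SIZE(texts) items, with their names */
	return snd_ctl_enum_info(uinfo, 1, ARRAY_SIZE(texts), texts);
}
```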

void snd_pcm_set_ops(struct snd_pcm * pcm, int direction, const struct snd_pcm_ops * ops)

set the PCM operators

Parameters

struct snd_pcm * pcm
the pcm instance
int direction
stream direction, SNDRV_PCM_STREAM_XXX
const struct snd_pcm_ops * ops
the operator table

Description

Sets the given PCM operators to the pcm instance.
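A typical call site, sketched for a hypothetical mychip driver; every mychip_* callback is illustrative, while snd_pcm_lib_ioctl (documented later in this section) is the generic ioctl handler most drivers plug in here:

```c
/* Minimal playback ops table for a hypothetical driver. */
static const struct snd_pcm_ops mychip_playback_ops = {
	.open      = mychip_playback_open,
	.close     = mychip_playback_close,
	.ioctl     = snd_pcm_lib_ioctl,		/* generic ioctl callback */
	.hw_params = mychip_hw_params,
	.hw_free   = mychip_hw_free,
	.prepare   = mychip_prepare,
	.trigger   = mychip_trigger,
	.pointer   = mychip_pointer,
};

/* usually called right after snd_pcm_new() */
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &mychip_playback_ops);
```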

void snd_pcm_set_sync(struct snd_pcm_substream * substream)

set the PCM sync id

Parameters

struct snd_pcm_substream * substream
the pcm substream

Description

Sets the PCM sync identifier for the card.

int snd_interval_refine(struct snd_interval * i, const struct snd_interval * v)

refine the interval value of configurator

Parameters

struct snd_interval * i
the interval value to refine
const struct snd_interval * v
the interval value to refer to

Description

Refines the interval value with the reference value. The interval is changed to the range satisfying both intervals. The interval status (min, max, integer, etc.) is re-evaluated.

Return

Positive if the value is changed, zero if it’s not changed, or a negative error code.

int snd_interval_ratnum(struct snd_interval * i, unsigned int rats_count, const struct snd_ratnum * rats, unsigned int * nump, unsigned int * denp)

refine the interval value

Parameters

struct snd_interval * i
interval to refine
unsigned int rats_count
number of ratnum_t
const struct snd_ratnum * rats
ratnum_t array
unsigned int * nump
pointer to store the resultant numerator
unsigned int * denp
pointer to store the resultant denominator

Return

Positive if the value is changed, zero if it’s not changed, or a negative error code.

int snd_interval_list(struct snd_interval * i, unsigned int count, const unsigned int * list, unsigned int mask)

refine the interval value from the list

Parameters

struct snd_interval * i
the interval value to refine
unsigned int count
the number of elements in the list
const unsigned int * list
the value list
unsigned int mask
the bit-mask to evaluate

Description

Refines the interval value from the list. When mask is non-zero, only the elements whose corresponding bits in mask are set are evaluated.

Return

Positive if the value is changed, zero if it’s not changed, or a negative error code.

int snd_interval_ranges(struct snd_interval * i, unsigned int count, const struct snd_interval * ranges, unsigned int mask)

refine the interval value from the list of ranges

Parameters

struct snd_interval * i
the interval value to refine
unsigned int count
the number of elements in the list of ranges
const struct snd_interval * ranges
the ranges list
unsigned int mask
the bit-mask to evaluate

Description

Refines the interval value from the list of ranges. When mask is non-zero, only the elements whose corresponding bits in mask are set are evaluated.

Return

Positive if the value is changed, zero if it’s not changed, or a negative error code.

int snd_pcm_hw_rule_add(struct snd_pcm_runtime * runtime, unsigned int cond, int var, snd_pcm_hw_rule_func_t func, void * private, int dep, ...)

add the hw-constraint rule

Parameters

struct snd_pcm_runtime * runtime
the pcm runtime instance
unsigned int cond
condition bits
int var
the variable to evaluate
snd_pcm_hw_rule_func_t func
the evaluation function
void * private
the private data pointer passed to function
int dep
the dependent variables
...
variable arguments

Return

Zero if successful, or a negative error code on failure.
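To illustrate the rule mechanism, here is a sketch of a rule that forces mono channels whenever the S16_LE format is selected, following the common pattern in ALSA driver code (the function name is hypothetical; the helpers snd_interval_any(), snd_interval_refine(), hw_param_interval() and hw_param_mask() are standard):

```c
/* Rule: if the format mask is reduced to S16_LE, constrain channels to 1. */
static int hw_rule_channels_by_format(struct snd_pcm_hw_params *params,
				      struct snd_pcm_hw_rule *rule)
{
	struct snd_interval *c =
		hw_param_interval(params, SNDRV_PCM_HW_PARAM_CHANNELS);
	struct snd_mask *f =
		hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
	struct snd_interval ch;

	snd_interval_any(&ch);		/* start from the full range */
	if (f->bits[0] == SNDRV_PCM_FMTBIT_S16_LE) {
		ch.min = ch.max = 1;
		ch.integer = 1;
		return snd_interval_refine(c, &ch);
	}
	return 0;			/* unchanged */
}

/* in the open callback: CHANNELS depends on FORMAT; -1 ends the list */
snd_pcm_hw_rule_add(substream->runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS,
		    hw_rule_channels_by_format, NULL,
		    SNDRV_PCM_HW_PARAM_FORMAT, -1);
```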

int snd_pcm_hw_constraint_mask64(struct snd_pcm_runtime * runtime, snd_pcm_hw_param_t var, u_int64_t mask)

apply the given bitmap mask constraint

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
snd_pcm_hw_param_t var
hw_params variable to apply the mask
u_int64_t mask
the 64bit bitmap mask

Description

Apply the constraint of the given bitmap mask to a 64-bit mask parameter.

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_hw_constraint_integer(struct snd_pcm_runtime * runtime, snd_pcm_hw_param_t var)

apply an integer constraint to an interval

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
snd_pcm_hw_param_t var
hw_params variable to apply the integer constraint

Description

Apply the constraint of integer to an interval parameter.

Return

Positive if the value is changed, zero if it’s not changed, or a negative error code.

int snd_pcm_hw_constraint_minmax(struct snd_pcm_runtime * runtime, snd_pcm_hw_param_t var, unsigned int min, unsigned int max)

apply a min/max range constraint to an interval

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
snd_pcm_hw_param_t var
hw_params variable to apply the range
unsigned int min
the minimal value
unsigned int max
the maximal value

Description

Apply the min/max range constraint to an interval parameter.

Return

Positive if the value is changed, zero if it’s not changed, or a negative error code.

int snd_pcm_hw_constraint_list(struct snd_pcm_runtime * runtime, unsigned int cond, snd_pcm_hw_param_t var, const struct snd_pcm_hw_constraint_list * l)

apply a list of constraints to a parameter

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
unsigned int cond
condition bits
snd_pcm_hw_param_t var
hw_params variable to apply the list constraint
const struct snd_pcm_hw_constraint_list * l
list

Description

Apply the list of constraints to an interval parameter.

Return

Zero if successful, or a negative error code on failure.
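A common use is restricting the sample rate to a fixed set in the open callback; this sketch assumes hypothetical hardware supporting only three rates:

```c
/* Rates supported by the (hypothetical) hardware. */
static const unsigned int mychip_rates[] = { 32000, 44100, 48000 };

static const struct snd_pcm_hw_constraint_list mychip_rate_constraints = {
	.count = ARRAY_SIZE(mychip_rates),
	.list  = mychip_rates,
	.mask  = 0,	/* 0 = evaluate all list elements */
};

static int mychip_playback_open(struct snd_pcm_substream *substream)
{
	/* ... other open-time setup ... */
	return snd_pcm_hw_constraint_list(substream->runtime, 0,
					  SNDRV_PCM_HW_PARAM_RATE,
					  &mychip_rate_constraints);
}
```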

int snd_pcm_hw_constraint_ranges(struct snd_pcm_runtime * runtime, unsigned int cond, snd_pcm_hw_param_t var, const struct snd_pcm_hw_constraint_ranges * r)

apply list of range constraints to a parameter

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
unsigned int cond
condition bits
snd_pcm_hw_param_t var
hw_params variable to apply the list of range constraints
const struct snd_pcm_hw_constraint_ranges * r
ranges

Description

Apply the list of range constraints to an interval parameter.

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_hw_constraint_ratnums(struct snd_pcm_runtime * runtime, unsigned int cond, snd_pcm_hw_param_t var, const struct snd_pcm_hw_constraint_ratnums * r)

apply ratnums constraint to a parameter

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
unsigned int cond
condition bits
snd_pcm_hw_param_t var
hw_params variable to apply the ratnums constraint
const struct snd_pcm_hw_constraint_ratnums * r
struct snd_ratnums constraints

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_hw_constraint_ratdens(struct snd_pcm_runtime * runtime, unsigned int cond, snd_pcm_hw_param_t var, const struct snd_pcm_hw_constraint_ratdens * r)

apply ratdens constraint to a parameter

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
unsigned int cond
condition bits
snd_pcm_hw_param_t var
hw_params variable to apply the ratdens constraint
const struct snd_pcm_hw_constraint_ratdens * r
struct snd_ratdens constraints

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_hw_constraint_msbits(struct snd_pcm_runtime * runtime, unsigned int cond, unsigned int width, unsigned int msbits)

add a hw constraint msbits rule

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
unsigned int cond
condition bits
unsigned int width
sample bits width
unsigned int msbits
msbits width

Description

This constraint will set the number of most significant bits (msbits) if a sample format with the specified width has been selected. If width is set to 0, the msbits will be set for any sample format with a width larger than the specified msbits.

Return

Zero if successful, or a negative error code on failure.
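For example, hardware that drives only the 24 most significant bits of a 32-bit sample would declare that in its open callback; a one-line sketch (runtime is the substream's runtime):

```c
/* Only the top 24 bits of each 32-bit sample reach the hardware. */
snd_pcm_hw_constraint_msbits(runtime, 0, 32, 24);
```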

int snd_pcm_hw_constraint_step(struct snd_pcm_runtime * runtime, unsigned int cond, snd_pcm_hw_param_t var, unsigned long step)

add a hw constraint step rule

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
unsigned int cond
condition bits
snd_pcm_hw_param_t var
hw_params variable to apply the step constraint
unsigned long step
step size

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_hw_constraint_pow2(struct snd_pcm_runtime * runtime, unsigned int cond, snd_pcm_hw_param_t var)

add a hw constraint power-of-2 rule

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
unsigned int cond
condition bits
snd_pcm_hw_param_t var
hw_params variable to apply the power-of-2 constraint

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_hw_rule_noresample(struct snd_pcm_runtime * runtime, unsigned int base_rate)

add a rule to allow disabling hw resampling

Parameters

struct snd_pcm_runtime * runtime
PCM runtime instance
unsigned int base_rate
the rate at which the hardware does not resample

Return

Zero if successful, or a negative error code on failure.

int snd_pcm_hw_param_value(const struct snd_pcm_hw_params * params, snd_pcm_hw_param_t var, int * dir)

return params field var value

Parameters

const struct snd_pcm_hw_params * params
the hw_params instance
snd_pcm_hw_param_t var
parameter to retrieve
int * dir
pointer to the direction (-1,0,1) or NULL

Return

The value for field var if it’s fixed in configuration space defined by params. -EINVAL otherwise.

int snd_pcm_hw_param_first(struct snd_pcm_substream * pcm, struct snd_pcm_hw_params * params, snd_pcm_hw_param_t var, int * dir)

refine config space and return minimum value

Parameters

struct snd_pcm_substream * pcm
PCM instance
struct snd_pcm_hw_params * params
the hw_params instance
snd_pcm_hw_param_t var
parameter to retrieve
int * dir
pointer to the direction (-1,0,1) or NULL

Description

Inside configuration space defined by params remove from var all values > minimum. Reduce configuration space accordingly.

Return

The minimum, or a negative error code on failure.

int snd_pcm_hw_param_last(struct snd_pcm_substream * pcm, struct snd_pcm_hw_params * params, snd_pcm_hw_param_t var, int * dir)

refine config space and return maximum value

Parameters

struct snd_pcm_substream * pcm
PCM instance
struct snd_pcm_hw_params * params
the hw_params instance
snd_pcm_hw_param_t var
parameter to retrieve
int * dir
pointer to the direction (-1,0,1) or NULL

Description

Inside configuration space defined by params remove from var all values < maximum. Reduce configuration space accordingly.

Return

The maximum, or a negative error code on failure.

int snd_pcm_lib_ioctl(struct snd_pcm_substream * substream, unsigned int cmd, void * arg)

a generic PCM ioctl callback

Parameters

struct snd_pcm_substream * substream
the pcm substream instance
unsigned int cmd
ioctl command
void * arg
ioctl argument

Description

Processes the generic ioctl commands for PCM. Can be passed as the ioctl callback for PCM ops.

Return

Zero if successful, or a negative error code on failure.

void snd_pcm_period_elapsed(struct snd_pcm_substream * substream)

update the pcm status for the next period

Parameters

struct snd_pcm_substream * substream
the pcm substream instance

Description

This function is called from the interrupt handler when the PCM has processed the period size. It will update the current pointer, wake up sleepers, etc.

Even if more than one period has elapsed since the last call, you have to call this only once.
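The canonical caller is the driver's interrupt handler; a hedged sketch (mychip names and lock are illustrative):

```c
/* Typical interrupt handler acknowledging a period interrupt. */
static irqreturn_t mychip_interrupt(int irq, void *dev_id)
{
	struct mychip *chip = dev_id;

	spin_lock(&chip->lock);
	/* ... check and clear the hardware interrupt status ... */
	spin_unlock(&chip->lock);

	/* call once per IRQ, without the driver's own lock held */
	snd_pcm_period_elapsed(chip->substream);
	return IRQ_HANDLED;
}
```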

int snd_pcm_add_chmap_ctls(struct snd_pcm * pcm, int stream, const struct snd_pcm_chmap_elem * chmap, int max_channels, unsigned long private_value, struct snd_pcm_chmap ** info_ret)

create channel-mapping control elements

Parameters

struct snd_pcm * pcm
the assigned PCM instance
int stream
stream direction
const struct snd_pcm_chmap_elem * chmap
channel map elements (for query)
int max_channels
the max number of channels for the stream
unsigned long private_value
the value passed to each kcontrol’s private_value field
struct snd_pcm_chmap ** info_ret
store struct snd_pcm_chmap instance if non-NULL

Description

Create channel-mapping control elements assigned to the given PCM stream(s).

Return

Zero if successful, or a negative error value.
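A minimal sketch for a stereo-only playback stream, using the predefined snd_pcm_std_chmaps table and no per-control private data or info_ret:

```c
/* In the driver's PCM setup code, after snd_pcm_new(). */
int err = snd_pcm_add_chmap_ctls(pcm, SNDRV_PCM_STREAM_PLAYBACK,
				 snd_pcm_std_chmaps, 2 /* max channels */,
				 0, NULL);
if (err < 0)
	return err;
```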

int snd_hwdep_new(struct snd_card * card, char * id, int device, struct snd_hwdep ** rhwdep)

create a new hwdep instance

Parameters

struct snd_card * card
the card instance
char * id
the id string
int device