pandora-kernel.git
14 years agoMerge branch 'iop-raid6' into async-tx-next
Dan Williams [Wed, 9 Sep 2009 00:53:57 +0000 (17:53 -0700)]
Merge branch 'iop-raid6' into async-tx-next

14 years agoI/OAT: Convert to PCI_VDEVICE()
Roland Dreier [Wed, 9 Sep 2009 00:43:03 +0000 (17:43 -0700)]
I/OAT: Convert to PCI_VDEVICE()

Trivial cleanup to make the PCI ID table easier to read.
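
For illustration, one way a table entry shrinks with the macro (a sketch,
not the full ioatdma table):

static struct pci_device_id ioat_pci_tbl[] = {
        /* longhand form */
        { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
        /* equivalent PCI_VDEVICE() form */
        { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IOAT) },
        { 0, }
};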

[dan.j.williams@intel.com: extended to v3.2 devices]
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoAdd MODULE_DEVICE_TABLE() so ioatdma module is autoloaded
Roland Dreier [Wed, 9 Sep 2009 00:43:03 +0000 (17:43 -0700)]
Add MODULE_DEVICE_TABLE() so ioatdma module is autoloaded

The ioatdma module is missing aliases for the PCI devices it supports,
so it is not autoloaded on boot.  Add a MODULE_DEVICE_TABLE() to get
these aliases.
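
With the ID table in place this is a one-liner; a minimal sketch, assuming
the table is named ioat_pci_tbl:

MODULE_DEVICE_TABLE(pci, ioat_pci_tbl); /* emits the pci: aliases that udev/modprobe match on */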

Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: segregate raid engines
Dan Williams [Wed, 9 Sep 2009 00:43:02 +0000 (17:43 -0700)]
ioat3: segregate raid engines

The cleanup routine for the raid cases imposes extra checks for handling
raid descriptors and extended descriptors.  If the channel does not
support raid it can avoid this extra overhead by using the ioat2 cleanup
path.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: ioat3.2 pci ids for Jasper Forest
Tom Picard [Wed, 9 Sep 2009 00:43:01 +0000 (17:43 -0700)]
ioat3: ioat3.2 pci ids for Jasper Forest

Jasper Forest introduces raid offload support via ioat3.2.  When raid
offload is enabled, two (out of 8) channels will report raid5/raid6
offload capabilities.  The remaining channels will only report ioat3.0
capabilities (memcpy).

Signed-off-by: Tom Picard <tom.s.picard@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: interrupt descriptor support
Dan Williams [Wed, 9 Sep 2009 00:43:00 +0000 (17:43 -0700)]
ioat3: interrupt descriptor support

The async_tx api uses the DMA_INTERRUPT operation type to terminate a
chain of issued operations with a callback routine.
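
A minimal sketch of terminating a chain this way from a client, assuming
'tx' is the last real operation in the chain and done_cb/ctx are
caller-provided:

struct async_submit_ctl submit;

init_async_submit(&submit, 0, tx, done_cb, ctx, NULL);
tx = async_trigger_callback(&submit); /* becomes a DMA_INTERRUPT descriptor when offloaded */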

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: support xor via pq descriptors
Dan Williams [Wed, 9 Sep 2009 00:43:00 +0000 (17:43 -0700)]
ioat3: support xor via pq descriptors

If a platform advertises pq capabilities, but not xor, then use
ioat3_prep_pqxor and ioat3_prep_pqxor_val to simulate xor support.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: pq support
Dan Williams [Wed, 9 Sep 2009 00:42:59 +0000 (17:42 -0700)]
ioat3: pq support

ioat3.2 adds support for raid6 syndrome generation (xor sum of galois
field multiplication products) using up to 8 sources.  It can also
perform a pq-zero-sum operation to validate whether the syndrome for a
given set of sources matches a previously computed syndrome.
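
A hedged sketch of consuming the validate capability through async_tx,
assuming blocks[] holds the sources followed by P and Q (names are
illustrative):

struct async_submit_ctl submit;
enum sum_check_flags pqres = 0;

init_async_submit(&submit, 0, NULL, NULL, NULL, scribble);
tx = async_syndrome_val(blocks, 0, disks, len, &pqres, spare_page, &submit);
/* after completion, pqres reports whether P and/or Q failed to verify */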

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: xor self test
Dan Williams [Wed, 9 Sep 2009 00:42:58 +0000 (17:42 -0700)]
ioat3: xor self test

This adds a hardware specific self test to be called from ioat_probe.
In the ioat3 case we will have tests for all the different raid
operations, while ioat1 and ioat2 will continue to just test memcpy.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: xor support
Dan Williams [Wed, 9 Sep 2009 00:42:57 +0000 (17:42 -0700)]
ioat3: xor support

ioat3.2 adds xor offload support for up to 8 sources.  It can also
perform an xor-zero-sum operation to validate whether all given sources
sum to zero, without writing to a destination.  Xor descriptors differ
from memcpy in that one operation may require multiple descriptors
depending on the number of sources.  When the number of sources exceeds
5 an extended descriptor is needed.  These descriptors need to be
accounted for when updating the DMA_COUNT register.
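
Clients reach the capability through the usual async_tx call; a minimal
sketch (buffer names are illustrative):

struct async_submit_ctl submit;

init_async_submit(&submit, ASYNC_TX_XOR_ZERO_DST, NULL, NULL, NULL, scribble);
tx = async_xor(dest, srcs, 0, src_cnt, len, &submit);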

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: enable dca for completion writes
Dan Williams [Wed, 9 Sep 2009 00:42:57 +0000 (17:42 -0700)]
ioat3: enable dca for completion writes

Tag completion writes for direct cache access to reduce the latency of
checking for descriptor completions.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: add 'ioat' sysfs attributes
Dan Williams [Wed, 9 Sep 2009 00:42:56 +0000 (17:42 -0700)]
ioat: add 'ioat' sysfs attributes

Export driver attributes for diagnostic purposes:
'ring_size': total number of descriptors available to the engine
'ring_active': number of descriptors in-flight
'capabilities': supported operation types for this channel
'version': Intel(R) QuickData specification revision

This also allows some chattiness to be removed from the driver startup
as this information is now available via sysfs.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: split ioat3 support to its own file, add memset
Dan Williams [Wed, 9 Sep 2009 00:42:55 +0000 (17:42 -0700)]
ioat3: split ioat3 support to its own file, add memset

Up until this point the drivers for Intel(R) QuickData Technology
engines, specification versions 2 and 3, were mostly identical save for
a few quirks.  Version 3.2 hardware adds many new capabilities (like
raid offload support) requiring some infrastructure that is not relevant
for v2.  For better code organization of the new functionality, move v3
and v3.2 support to its own file, dma_v3.c, and export some routines from
the base files (dma.c and dma_v2.c) that can be reused directly.

The first new capability included in this code reorganization is support
for v3.2 memset operations.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat3: hardware version 3.2 register / descriptor definitions
Dan Williams [Wed, 9 Sep 2009 00:42:54 +0000 (17:42 -0700)]
ioat3: hardware version 3.2 register / descriptor definitions

ioat3.2 adds raid5 and raid6 offload capabilities.

Signed-off-by: Tom Picard <tom.s.picard@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat2+: add fence support
Dan Williams [Wed, 9 Sep 2009 00:42:53 +0000 (17:42 -0700)]
ioat2+: add fence support

In preparation for adding more operation types to the ioat3 path the
driver needs to honor the DMA_PREP_FENCE flag.  For example the async_tx api
will hand xor->memcpy->xor chains to the driver with the 'fence' flag set on
the first xor and the memcpy operation.  This flag in turn sets the 'fence'
flag in the descriptor control field telling the hardware that future
descriptors in the chain depend on the result of the current descriptor, so
wait for all writes to complete before starting the next operation.

Note that ioat1 does not prefetch the descriptor chain, so does not
require/support fenced operations.
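
The prep-time handling is essentially a flag copy; a rough sketch (the
hardware-descriptor field name here is illustrative, not the exact ioat
structure member):

if (flags & DMA_PREP_FENCE)
        hw->ctl_f.fence = 1; /* complete all writes before starting the next descriptor */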

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agodmaengine, async_tx: support alignment checks
Dan Williams [Wed, 9 Sep 2009 00:42:53 +0000 (17:42 -0700)]
dmaengine, async_tx: support alignment checks

Some engines have transfer size and address alignment restrictions.  Add
a per-operation alignment property to struct dma_device that the async
routines and dmatest can use to check alignment capabilities.
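
A hedged sketch of the intended usage, assuming the alignment is stored as
a power-of-two shift and checked with a helper such as is_dma_copy_aligned():

/* driver side: memcpy offsets and length must be 4-byte aligned */
dma_dev->copy_align = 2;

/* client side (async_memcpy, dmatest): fall back to a cpu copy when unaligned */
if (device && is_dma_copy_aligned(device, src_offset, dest_offset, len))
        /* ...offload to the dma engine... */;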

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agodmaengine: cleanup unused transaction types
Dan Williams [Wed, 9 Sep 2009 00:42:52 +0000 (17:42 -0700)]
dmaengine: cleanup unused transaction types

No drivers currently implement these operation types, so they can be
deleted.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agodmaengine, async_tx: add a "no channel switch" allocator
Dan Williams [Wed, 9 Sep 2009 00:42:51 +0000 (17:42 -0700)]
dmaengine, async_tx: add a "no channel switch" allocator

Channel switching is problematic for some dmaengine drivers as the
architecture precludes separating the ->prep from ->submit.  In these
cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify
the async_tx allocator to only return channels that support all of the
required asynchronous operations.

For example MD_RAID456=y selects support for asynchronous xor, xor
validate, pq, pq validate, and memcpy.  When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=y any channel with all these
capabilities is marked DMA_ASYNC_TX allowing async_tx_find_channel() to
quickly locate compatible channels with the guarantee that dependency
chains will remain on one channel.  When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=n async_tx_find_channel() may select
channels that lead to operation chains that need to cross channel
boundaries using the async_tx channel switch capability.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agodmaengine: add fence support
Dan Williams [Wed, 9 Sep 2009 00:42:50 +0000 (17:42 -0700)]
dmaengine: add fence support

Some engines optimize operation by reading ahead in the descriptor chain
such that descriptor2 may start execution before descriptor1 completes.
If descriptor2 depends on the result from descriptor1 then a fence is
required (on descriptor2) to disable this optimization.  The async_tx
api could implicitly identify dependencies via the 'depend_tx'
parameter, but that would constrain cases where the dependency chain
only specifies a completion order rather than a data dependency.  So,
provide an ASYNC_TX_FENCE to explicitly identify data dependencies.
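
A minimal sketch of flagging a true data dependency, where the xor consumes
what 'tx' wrote (names are illustrative):

struct async_submit_ctl submit;

init_async_submit(&submit, ASYNC_TX_FENCE, tx, NULL, NULL, scribble);
tx = async_xor(dest, srcs, 0, src_cnt, len, &submit);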

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoMerge branch 'md-raid6-accel' into ioat3.2
Dan Williams [Wed, 9 Sep 2009 00:42:29 +0000 (17:42 -0700)]
Merge branch 'md-raid6-accel' into ioat3.2

Conflicts:
include/linux/dmaengine.h

14 years agonet_dma: poll for a descriptor after allocation failure
Dan Williams [Tue, 8 Sep 2009 19:02:15 +0000 (12:02 -0700)]
net_dma: poll for a descriptor after allocation failure

Handle descriptor allocation failures by polling for a descriptor.  The
driver will force forward progress when polled.  In the best case this
polling interval will be the time it takes for one dma memcpy
transaction to complete.  In the worst case, channel hang, we will need
to wait 100ms for the cleanup watchdog to fire (ioatdma driver).
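
Roughly, the allocation-failure path becomes a retry loop of this shape
(a sketch of the pattern, not the exact net_dma code):

struct dma_device *dev = chan->device;

do {
        tx = dev->device_prep_dma_memcpy(chan, dma_dest, dma_src, len, 0);
        if (!tx)
                dma_async_issue_pending(chan); /* force forward progress */
} while (!tx);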

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat2,3: dynamically resize descriptor ring
Dan Williams [Tue, 8 Sep 2009 19:02:01 +0000 (12:02 -0700)]
ioat2,3: dynamically resize descriptor ring

Increment the allocation order of the descriptor ring every time we run
out of descriptors up to a maximum of allocation order specified by the
module parameter 'ioat_max_alloc_order'.  After each idle period
decrement the allocation order to a minimum order of
'ioat_ring_alloc_order' (i.e. the default ring size, tunable as a module
parameter).

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: switch watchdog and reset handler from workqueue to timer
Dan Williams [Tue, 8 Sep 2009 19:01:49 +0000 (12:01 -0700)]
ioat: switch watchdog and reset handler from workqueue to timer

In order to support dynamic resizing of the descriptor ring or polling
for a descriptor in the presence of a hung channel the reset handler
needs to make progress while in a non-preemptible context.  The current
workqueue implementation precludes polling channel reset completion
under spin_lock().

This conversion also allows us to return to opportunistic cleanup in the
ioat2 case as the timer implementation guarantees at least one cleanup
after every descriptor is submitted.  This means the worst case
completion latency becomes the timer frequency (for exceptional
circumstances), but with the benefit of avoiding busy waiting when the
lock is contended.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat1: trim ioat_dma_desc_sw
Dan Williams [Tue, 8 Sep 2009 19:01:38 +0000 (12:01 -0700)]
ioat1: trim ioat_dma_desc_sw

Save 4 bytes per software descriptor by transmitting tx_cnt in an unused
portion of the hardware descriptor.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: ___devinit annotate the initialization paths
Dan Williams [Tue, 8 Sep 2009 19:01:30 +0000 (12:01 -0700)]
ioat: ___devinit annotate the initialization paths

Mark all single use initialization routines with __devinit.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: preserve chanctrl bits when re-arming interrupts
Dan Williams [Tue, 8 Sep 2009 19:01:21 +0000 (12:01 -0700)]
ioat: preserve chanctrl bits when re-arming interrupts

The register write in ioat_dma_cleanup_tasklet is unfortunate in two
ways:
1/ It clears the extra 'enable' bits that we set at alloc_chan_resources time
2/ It gives the impression that it disables interrupts when it is in
   fact re-arming interrupts

[ Impact: fix, persist the value of the chanctrl register when re-arming ]

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: ignore reserved bits for chancnt and xfercap
Dan Williams [Tue, 8 Sep 2009 19:01:14 +0000 (12:01 -0700)]
ioat: ignore reserved bits for chancnt and xfercap

Don't trust that the reserved bits are always zero; also sanity check
the returned value.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: cleanup completion status reads
Dan Williams [Tue, 8 Sep 2009 19:01:04 +0000 (12:01 -0700)]
ioat: cleanup completion status reads

The cleanup path makes an effort to only perform an atomic read of the
64-bit completion address.  However in the 32-bit case it does not
matter if we read the upper-32 and lower-32 non-atomically because the
upper-32 will always be zero.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: add some dev_dbg() calls
Dan Williams [Tue, 8 Sep 2009 19:00:55 +0000 (12:00 -0700)]
ioat: add some dev_dbg() calls

Provide some output for debugging the driver.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat1: kill unused unmap parameters
Dan Williams [Tue, 8 Sep 2009 19:00:46 +0000 (12:00 -0700)]
ioat1: kill unused unmap parameters

The unified ioat1/ioat2 ioat_dma_unmap() implementation derives the
source and dest addresses from the unmap descriptor.  There is no longer
a need to track this information in struct ioat_desc_sw.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat2,3: convert to a true ring buffer
Dan Williams [Wed, 26 Aug 2009 20:01:44 +0000 (13:01 -0700)]
ioat2,3: convert to a true ring buffer

Replace the current linked list munged into a ring with a native ring
buffer implementation.  The benefit of this approach is reduced overhead
as many parameters can be derived from ring position with simple pointer
comparisons and descriptor allocation/freeing becomes just a
manipulation of head/tail pointers.
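
The bookkeeping reduces to power-of-two ring arithmetic; an illustrative
sketch (not the driver's exact helpers):

#define ring_size(ioat)   (1 << (ioat)->alloc_order)
#define ring_active(ioat) (((ioat)->head - (ioat)->tail) & (ring_size(ioat) - 1))
#define ring_space(ioat)  (ring_size(ioat) - 1 - ring_active(ioat))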

It requires a contiguous allocation for the software descriptor
information.

Since this arrangement is significantly different from the ioat1 chain,
move ioat2,3 support into its own file and header.  Common routines are
exported from driver/dma/ioat/dma.[ch].

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: prepare the code for ioat[12]_dma_chan split
Dan Williams [Tue, 28 Jul 2009 21:44:50 +0000 (14:44 -0700)]
ioat: prepare the code for ioat[12]_dma_chan split

Prepare the code for the conversion of the ioat2 linked-list-ring into a
native ring buffer.  After this conversion ioat2 channels will share
less of the ioat1 infrastructure, but there will still be places where
sharing is possible.  struct ioat_chan_common is created to house the
channel attributes that will remain common between ioat1 and ioat2
channels.

For every routine that accesses both common and hardware specific fields
the old unified 'ioat_chan' pointer is split into an 'ioat' and a 'chan'
pointer, where 'chan' references the common fields and 'ioat' the
hardware/version-specific ones.

[ Impact: pure structure member movement/variable renames, no logic changes ]

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: fix self test interrupts
Dan Williams [Tue, 28 Jul 2009 21:44:05 +0000 (14:44 -0700)]
ioat: fix self test interrupts

If a callback is to be attached to a descriptor the channel needs to
know at ->prep time so it can set the interrupt enable bit.  This is in
preparation for moving descriptor ioat2 descriptor preparation from
->submit to ->prep.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat1: move descriptor allocation from submit to prep
Dan Williams [Tue, 28 Jul 2009 21:44:04 +0000 (14:44 -0700)]
ioat1: move descriptor allocation from submit to prep

The async_tx api assumes that after a successful ->prep a subsequent
->submit will not fail due to a lack of resources.

This also fixes a bug in the allocation failure case.  Previously the
descriptors allocated prior to the allocation failure would not be
returned to the free list.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: define descriptor control bit-field
Dan Williams [Tue, 28 Jul 2009 21:44:04 +0000 (14:44 -0700)]
ioat: define descriptor control bit-field

This cleans up a mess of and'ing and or'ing bit definitions, and allows
simple assignments from the specified dma_ctrl_flags parameter.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: fix type mismatch for ->dmacount
Dan Williams [Tue, 28 Jul 2009 21:44:04 +0000 (14:44 -0700)]
ioat: fix type mismatch for ->dmacount

->dmacount tracks the sequence number of active descriptors.  It is
written to the DMACOUNT register to update the channel's view of pending
descriptors in the chain.  The register is 16-bits so ->dmacount should
be unsigned and 16-bit as well.  Also modify ->desccount to maintain
alignment.

This was never a problem in practice because we never compared dmacount
values, but this is a bug waiting to happen.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: split ioat_dma_probe into core/version-specific routines
Dan Williams [Tue, 28 Jul 2009 21:42:38 +0000 (14:42 -0700)]
ioat: split ioat_dma_probe into core/version-specific routines

Towards the removal of ioatdma_device.version split the initialization
path into distinct versions.  This conversion:
1/ moves version specific probe code to version specific routines
2/ removes the need for ioat_device
3/ turns off the ioat1 msi quirk if the device is reinitialized for intx

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: kill function prototype ifdef guards
Dan Williams [Tue, 28 Jul 2009 21:42:32 +0000 (14:42 -0700)]
ioat: kill function prototype ifdef guards

The only .c files that utilize these protected prototypes depend on
CONFIG_INTEL_IOATDMA=y, so there is no value gained in providing empty
prototypes.

[ Impact: pure cleanup ]

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: cleanup some long deref chains and 80 column collisions
Dan Williams [Tue, 28 Jul 2009 21:33:42 +0000 (14:33 -0700)]
ioat: cleanup some long deref chains and 80 column collisions

* reduce device->common. to dma-> in ioat_dma_{probe,remove,selftest}
* ioat_lookup_chan_by_index to ioat_chan_by_index
* multi-line function definitions
* ioat_desc_sw.async_tx to ioat_desc_sw.txd
* desc->txd. to tx-> in cleanup routine

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: convert ioat_probe to pcim/devm
Dan Williams [Wed, 9 Sep 2009 00:29:44 +0000 (17:29 -0700)]
ioat: convert ioat_probe to pcim/devm

The driver currently duplicates much of what these routines offer, so
just use the common code.  For example ->irq_mode tracks what interrupt
mode was initialized, which duplicates the ->msix_enabled and
->msi_enabled handling in pcim_release.

This also adds a check to the return value of dma_async_device_register,
which can fail.
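
The managed pattern being adopted looks roughly like this (BAR mask and
DRV_NAME are illustrative):

err = pcim_enable_device(pdev);
if (err)
        return err;
err = pcim_iomap_regions(pdev, 1 << 0 /* BAR 0 */, DRV_NAME);
if (err)
        return err;
/* no explicit pci_release_regions()/iounmap() needed on the error/remove paths */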

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: move definitions to dma.h
Dan Williams [Wed, 9 Sep 2009 00:29:02 +0000 (17:29 -0700)]
ioat: move definitions to dma.h

Some of these defines may be useful outside of dma.c and the header is
private so there are no namespace pollution concerns.

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid456: distribute raid processing over multiple cores
Dan Williams [Sun, 30 Aug 2009 02:13:13 +0000 (19:13 -0700)]
md/raid456: distribute raid processing over multiple cores

Now that the resources to handle stripe_head operations are allocated
percpu it is possible for raid5d to distribute stripe handling over
multiple cores.  This conversion also adds a call to cond_resched() in
the non-multicore case to prevent one core from getting monopolized for
raid operations.

Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid6: remove synchronous infrastructure
Yuri Tikhonov [Sun, 30 Aug 2009 02:13:13 +0000 (19:13 -0700)]
md/raid6: remove synchronous infrastructure

These routines have been replaced by their asynchronous counterparts.

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid6: asynchronous handle_stripe6
Yuri Tikhonov [Sun, 30 Aug 2009 02:13:13 +0000 (19:13 -0700)]
md/raid6: asynchronous handle_stripe6

1/ Use STRIPE_OP_BIOFILL to offload completion of read requests to
   raid_run_ops
2/ Implement a handler for sh->reconstruct_state similar to the raid5 case
   (adds handling of Q parity)
3/ Prevent handle_parity_checks6 from running concurrently with 'compute'
   operations
4/ Hook up raid_run_ops

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid6: asynchronous handle_parity_check6
Dan Williams [Tue, 14 Jul 2009 20:40:57 +0000 (13:40 -0700)]
md/raid6: asynchronous handle_parity_check6

[ Based on an original patch by Yuri Tikhonov ]

Implement the state machine for handling the RAID-6 parities check and
repair functionality.  Note that the raid6 case does not need to check
for new failures, like raid5, as it will always writeback the correct
disks.  The raid5 case can be updated to check zero_sum_result to avoid
getting confused by new failures rather than retrying the entire check
operation.

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid6: asynchronous handle_stripe_dirtying6
Yuri Tikhonov [Sun, 30 Aug 2009 02:13:12 +0000 (19:13 -0700)]
md/raid6: asynchronous handle_stripe_dirtying6

In the synchronous implementation of stripe dirtying we processed a
degraded stripe with one call to handle_stripe_dirtying6().  I.e.
compute the missing blocks from the other drives, then copy in the new
data and reconstruct the parities.

In the asynchronous case we do not perform stripe operations directly.
Instead, operations are scheduled with flags to be later serviced by
raid_run_ops.  So, for the degraded case the final reconstruction step
can only be carried out after all blocks have been brought up to date by
being read, or computed.  Like the raid5 case schedule_reconstruction()
sets STRIPE_OP_RECONSTRUCT to request a parity generation pass and
through operation chaining can handle compute and reconstruct in a
single raid_run_ops pass.

[dan.j.williams@intel.com: fixup handle_stripe_dirtying6 gating]
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid6: asynchronous handle_stripe_fill6
Yuri Tikhonov [Sun, 30 Aug 2009 02:13:12 +0000 (19:13 -0700)]
md/raid6: asynchronous handle_stripe_fill6

Modify handle_stripe_fill6 to work asynchronously by introducing
fetch_block6 as the raid6 analog of fetch_block5 (schedule compute
operations for missing/out-of-sync disks).

[dan.j.williams@intel.com: compute D+Q in one pass]
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid5,6: common schedule_reconstruction for raid5/6
Yuri Tikhonov [Sun, 30 Aug 2009 02:13:12 +0000 (19:13 -0700)]
md/raid5,6: common schedule_reconstruction for raid5/6

Extend schedule_reconstruction5 for reuse by the raid6 path.  Add
support for generating Q and BUG() if a request is made to perform
'prexor'.

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid6: asynchronous raid6 operations
Dan Williams [Tue, 14 Jul 2009 20:40:19 +0000 (13:40 -0700)]
md/raid6: asynchronous raid6 operations

[ Based on an original patch by Yuri Tikhonov ]

The raid_run_ops routine uses the asynchronous offload api and
the stripe_operations member of a stripe_head to carry out xor+pq+copy
operations asynchronously, outside the lock.

The operations performed by RAID-6 are the same as in the RAID-5 case
except for no support of STRIPE_OP_PREXOR operations. All the others
are supported:
STRIPE_OP_BIOFILL
 - copy data into request buffers to satisfy a read request
STRIPE_OP_COMPUTE_BLK
 - generate missing blocks (1 or 2) in the cache from the other blocks
STRIPE_OP_BIODRAIN
 - copy data out of request buffers to satisfy a write request
STRIPE_OP_RECONSTRUCT
 - recalculate parity for new data that has entered the cache
STRIPE_OP_CHECK
 - verify that the parity is correct

The flow is the same as in the RAID-5 case, and reuses some routines, namely:
1/ ops_complete_postxor (renamed to ops_complete_reconstruct)
2/ ops_complete_compute (updated to set up to 2 targets uptodate)
3/ ops_run_check (renamed to ops_run_check_p for xor parity checks)

[neilb@suse.de: fixes to get it to pass mdadm regression suite]
Reviewed-by: Andre Noll <maan@systemlinux.org>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid5: factor out mark_uptodate from ops_complete_compute5
Dan Williams [Sun, 30 Aug 2009 02:13:11 +0000 (19:13 -0700)]
md/raid5: factor out mark_uptodate from ops_complete_compute5

ops_complete_compute5 can be reused in the raid6 path if it is updated to
generically handle a second target.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoiop-adma: P+Q self test
Dan Williams [Sun, 30 Aug 2009 02:12:40 +0000 (19:12 -0700)]
iop-adma: P+Q self test

Even though the intent is to extend dmatest with P+Q tests there is
still value in having an always-on sanity check to prevent an
unintentionally broken driver from registering.

This depends on raid6_pq.ko for verification, the side effect being that
PQ capable channels will fail to register when raid6 is disabled.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoiop-adma: P+Q support for iop13xx adma engines
Dan Williams [Fri, 28 Aug 2009 21:32:04 +0000 (14:32 -0700)]
iop-adma: P+Q support for iop13xx adma engines

iop33x support is not included because that engine is a bit more awkward
to handle in that it can either be in xor mode or pq mode.  The
dmaengine/async_tx layers currently only comprehend static capabilities.

Note iop13xx does not support hardware PQ continuation so the driver
must handle the DMA_PREP_CONTINUE flag for operations across > 16
sources. From the comment for dma_maxpq:

/* When an engine does not support native continuation we need 3 extra
 * source slots to reuse P and Q with the following coefficients:
 * 1/ {00} * P : remove P from Q', but use it as a source for P'
 * 2/ {01} * Q : use Q to continue Q' calculation
 * 3/ {00} * Q : subtract Q from P' to cancel (2)
 */
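
A hedged sketch of how a driver in this situation registers its limit and
how the core consumes it (the 16 mirrors the "> 16 sources" note above and
is illustrative):

dma_set_maxpq(dma_dev, 16, 0); /* 16 source slots, no native continuation */

/* async_pq sizes each pass with dma_maxpq(dma_dev, flags), which reserves
 * the 3 extra slots above whenever DMA_PREP_CONTINUE is set */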

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoiop-adma: fix lockdep false positive
Dan Williams [Tue, 14 Jul 2009 20:38:29 +0000 (13:38 -0700)]
iop-adma: fix lockdep false positive

lockdep correctly identifies a potential recursive locking case for
iop_chan->lock, but in the dependency submission case we expect that the same
class will be acquired for both the parent dependency and the child channel.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoiop-adma: cleanup iop_adma_run_tx_complete_actions
Dan Williams [Sun, 30 Aug 2009 02:12:39 +0000 (19:12 -0700)]
iop-adma: cleanup iop_adma_run_tx_complete_actions

Replace 'desc->async_tx.' with 'tx->'

[ Impact: pure cleanup ]

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoasync_tx: raid6 recovery self test
Dan Williams [Tue, 14 Jul 2009 19:20:37 +0000 (12:20 -0700)]
async_tx: raid6 recovery self test

Port drivers/md/raid6test/test.c to use the async raid6 recovery
routines.  This is meant as a unit test for raid6 acceleration drivers.  In
addition to the 16-drive test case this implements tests for the 4-disk and
5-disk special cases (dma devices cannot generically handle fewer than 2
sources), and adds a test for the D+Q case.

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agodmatest: add pq support
Dan Williams [Sun, 30 Aug 2009 02:09:27 +0000 (19:09 -0700)]
dmatest: add pq support

Test raid6 p+q operations with a simple "always multiply by 1" q
calculation to fit into dmatest's current destination verification
scheme.

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoasync_tx: add support for asynchronous RAID6 recovery operations
Dan Williams [Tue, 14 Jul 2009 19:20:37 +0000 (12:20 -0700)]
async_tx: add support for asynchronous RAID6 recovery operations

 async_raid6_2data_recov() recovers two data disk failures

 async_raid6_datap_recov() recovers a data disk and the P disk

These routines are a port of the synchronous versions found in
drivers/md/raid6recov.c.  The primary difference is breaking out the xor
operations into separate calls to async_xor.  Two helper routines are
introduced to perform scalar multiplication where needed.
async_sum_product() multiplies two sources by scalar coefficients and
then sums (xor) the result.  async_mult() simply multiplies a single
source by a scalar.

This implementation also includes, in contrast to the original
synchronous-only code, special case handling for the 4-disk and 5-disk
array cases.  In these situations the default N-disk algorithm will
present 0-source or 1-source operations to dma devices.  To cover for
dma devices where the minimum source count is 2 we implement 4-disk and
5-disk handling in the recovery code.
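
A hedged sketch of the two entry points (disk count and failed indices are
illustrative):

/* two data disks lost */
tx = async_raid6_2data_recov(disks, bytes, faila, failb, ptrs, &submit);

/* one data disk plus P lost */
tx = async_raid6_datap_recov(disks, bytes, faila, ptrs, &submit);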

[ Impact: asynchronous raid6 recovery routines for 2data and datap cases ]

Cc: Yuri Tikhonov <yur@emcraft.com>
Cc: Ilya Yanok <yanok@emcraft.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoasync_tx: add support for asynchronous GF multiplication
Dan Williams [Tue, 14 Jul 2009 19:20:36 +0000 (12:20 -0700)]
async_tx: add support for asynchronous GF multiplication

[ Based on an original patch by Yuri Tikhonov ]

This adds support for doing asynchronous GF multiplication by adding
two additional functions to the async_tx API:

 async_gen_syndrome() does simultaneous XOR and Galois field
    multiplication of sources.

 async_syndrome_val() validates the given source buffers against known P
    and Q values.

When a request is made to run async_pq against more than the hardware
maximum number of supported sources we need to reuse the previous
generated P and Q values as sources into the next operation.  Care must
be taken to remove Q from P' and P from Q'.  For example to perform a 5
source pq op with hardware that only supports 4 sources at a time the
following approach is taken:

p, q = PQ(src0, src1, src2, src3, COEF({01}, {02}, {04}, {08}))
p', q' = PQ(p, q, q, src4, COEF({00}, {01}, {00}, {10}))

p' = p + q + q + src4 = p + src4
q' = {00}*p + {01}*q + {00}*q + {10}*src4 = q + {10}*src4

Note: 4 is the minimum acceptable maxpq; otherwise we punt to the
synchronous software path.

The DMA_PREP_CONTINUE flag indicates to the driver to reuse p and q as
sources (in the above manner) and fill the remaining slots up to maxpq
with the new sources/coefficients.

Note1: Some devices have native support for P+Q continuation and can skip
this extra work.  Devices with this capability can advertise it with
dma_set_maxpq.  It is up to each driver how to handle the
DMA_PREP_CONTINUE flag.

Note2: The api supports disabling the generation of P when generating Q;
this is ignored by the synchronous path but is implemented by some dma
devices to save unnecessary writes.  In this case the continuation
algorithm is simplified to only reuse Q as a source.
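
A minimal sketch of generating P and Q with the new interface, where
blocks[] holds the data sources followed by the P and Q destinations
(names are illustrative):

struct async_submit_ctl submit;

init_async_submit(&submit, 0, NULL, callback, callback_param, scribble);
tx = async_gen_syndrome(blocks, 0, disks, len, &submit);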

Cc: H. Peter Anvin <hpa@zytor.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoasync_tx: remove walk of tx->parent chain in dma_wait_for_async_tx
Dan Williams [Tue, 14 Jul 2009 19:19:02 +0000 (12:19 -0700)]
async_tx: remove walk of tx->parent chain in dma_wait_for_async_tx

We currently walk the parent chain when waiting for a given tx to
complete; however, this walk may race with the driver cleanup routine.
The routines in async_raid6_recov.c may fall back to the synchronous
path at any point so we need to be prepared to call async_tx_quiesce()
(which calls  dma_wait_for_async_tx).  To remove the ->parent walk we
guarantee that every time a dependency is attached ->issue_pending() is
invoked, then we can simply poll the initial descriptor until
completion.

This also allows for a lighter weight 'issue pending' implementation as
there is no longer a requirement to iterate through all the channels'
->issue_pending() routines as long as operations have been submitted in
an ordered chain.  async_tx_issue_pending() is added for this case.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoasync_tx: kill needless module_{init|exit}
Dan Williams [Sun, 30 Aug 2009 02:09:26 +0000 (19:09 -0700)]
async_tx: kill needless module_{init|exit}

If module_init and module_exit are nops then neither needs to be defined.

[ Impact: pure cleanup ]

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoasync_tx: add sum check flags
Dan Williams [Sun, 30 Aug 2009 02:09:26 +0000 (19:09 -0700)]
async_tx: add sum check flags

Replace the flat zero_sum_result with a collection of flags to contain
the P (xor) zero-sum result, and the soon to be utilized Q (raid6 Reed-Solomon
syndrome) zero-sum result.  Use the SUM_CHECK_ namespace instead
of DMA_ since these flags will be used on non-dma-zero-sum enabled
platforms.
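
For illustration, a zero-sum consumer now tests individual bits rather than
a single integer (a sketch; flag names as introduced by this change):

enum sum_check_flags result = 0;

/* ...submit an xor_val/pq_val operation that fills in 'result'... */

if (result & SUM_CHECK_P_RESULT)
        /* P (xor) parity did not verify */;
if (result & SUM_CHECK_Q_RESULT)
        /* Q (syndrome) did not verify */;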

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid5,6: add percpu scribble region for buffer lists
Dan Williams [Tue, 14 Jul 2009 18:50:52 +0000 (11:50 -0700)]
md/raid5,6: add percpu scribble region for buffer lists

Use percpu memory rather than stack for storing the buffer lists used in
parity calculations.  Include space for dma address conversions and pass
that to async_tx via the async_submit_ctl.scribble pointer.

[ Impact: move memory pressure from stack to heap ]

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid6: move the spare page to a percpu allocation
Dan Williams [Tue, 14 Jul 2009 18:48:22 +0000 (11:48 -0700)]
md/raid6: move the spare page to a percpu allocation

In preparation for asynchronous handling of raid6 operations move the
spare page to a percpu allocation to allow multiple simultaneous
synchronous raid6 recovery operations.

Make this allocation cpu hotplug aware to maximize allocation
efficiency.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoioat: move to drivers/dma/ioat/
Dan Williams [Tue, 28 Jul 2009 21:32:12 +0000 (14:32 -0700)]
ioat: move to drivers/dma/ioat/

When first created the ioat driver was the only inhabitant of
drivers/dma/.  Now, it is the only multi-file (more than a .c and a .h)
driver in the directory.  Moving it to an ioat/ subdirectory allows the
naming convention to be cleaned up, and allows for future splitting of
the source files by hardware version (v1, v2, and v3).

Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agomd/raid6: release spare page at ->stop()
Dan Williams [Tue, 14 Jul 2009 18:48:16 +0000 (11:48 -0700)]
md/raid6: release spare page at ->stop()

Add missing call to safe_put_page from stop() by unifying open coded
raid5_conf_t de-allocation under free_conf().

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
14 years agoLinux 2.6.30 v2.6.30
Linus Torvalds [Wed, 10 Jun 2009 03:05:27 +0000 (20:05 -0700)]
Linux 2.6.30

14 years agochar: mxser, fix ISA board lookup
Peter Botha [Wed, 10 Jun 2009 00:16:32 +0000 (17:16 -0700)]
char: mxser, fix ISA board lookup

There's a bug in the mxser kernel module that still appears in the
2.6.29.4 kernel.

mxser_get_ISA_conf takes an I/O address as its first argument; by passing the
logical NOT of the ioaddr, you're effectively passing 0, which means it won't
be able to talk to an ISA card.  I have tested this, and removing the !
fixes the problem.

Cc: "Peter Botha" <peterb@goldcircle.co.za>
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
14 years agojbd: fix race in buffer processing in commit code
Jan Kara [Tue, 9 Jun 2009 23:26:26 +0000 (16:26 -0700)]
jbd: fix race in buffer processing in commit code

In commit code, we scan buffers attached to a transaction.  During this
scan, we sometimes have to drop j_list_lock and then we recheck whether
the journal buffer head didn't get freed by journal_try_to_free_buffers().
 But checking for buffer_jbd(bh) isn't enough because a new journal head
could get attached to our buffer head.  So add a check whether the journal
head remained the same and whether it's still at the same transaction and
list.

This is a nasty bug and can cause problems like memory corruption (use after
free) or trigger various assertions in JBD code (observed).

Signed-off-by: Jan Kara <jack@suse.cz>
Cc: <stable@kernel.org>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
14 years agoautofs4: remove hashed check in validate_wait()
Ian Kent [Tue, 9 Jun 2009 23:26:24 +0000 (16:26 -0700)]
autofs4: remove hashed check in validate_wait()

The recent ->lookup() deadlock correction required the directory inode
mutex to be dropped while waiting for expire completion.  We were
concerned about side effects from this change and one has been identified.

I saw several error messages.

They cause autofs to become quite confused and don't really point to the
actual problem.

Things like:

handle_packet_missing_direct:1376: can't find map entry for (43,1827932)

which is usually totally fatal (although in this case it wouldn't be
except that I treat it as such because it normally is).

do_mount_direct: direct trigger not valid or already mounted
/test/nested/g3c/s1/ss1

which is recoverable, however if this problem is at play it can cause
autofs to become quite confused as to the dependencies in the mount tree
because mount triggers end up mounted multiple times.  It's hard to
accurately check for this over mounting case and automount shouldn't need
to if the kernel module is doing its job.

There was one other message, similar in consequence to this last one, but I
can't locate a log example just now.

When checking if a mount has already completed prior to adding a new mount
request to the wait queue we check if the dentry is hashed and, if so, if
it is a mount point.  But, if a mount successfully completed while we
slept on the wait queue mutex the dentry must exist for the mount to have
completed so the test is not really needed.

Mounts can also be done on top of a global root dentry, so for the above
case, where a mount request completes and the wait queue entry has already
been removed, the hashed test returning false can cause an incorrect
callback to the daemon.  Also, d_mountpoint() is not sufficient to check
if a mount has completed for the multi-mount case when we don't have a
real mount at the base of the tree.

Signed-off-by: Ian Kent <raven@themaw.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
14 years agoshm: fix unused warnings on nommu
Mike Frysinger [Tue, 9 Jun 2009 23:26:23 +0000 (16:26 -0700)]
shm: fix unused warnings on nommu

The massive nommu update (8feae131) resulted in these warnings:
ipc/shm.c: In function `sys_shmdt':
ipc/shm.c:974: warning: unused variable `size'
ipc/shm.c:972: warning: unused variable `next'

Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
14 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus
Linus Torvalds [Tue, 9 Jun 2009 15:48:32 +0000 (08:48 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus

* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus:
  kvm: fix kvm reboot crash when MAXSMP is used
  cpumask: alloc zeroed cpumask for static cpumask_var_ts
  cpumask: introduce zalloc_cpumask_var

14 years agoMerge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block
Linus Torvalds [Tue, 9 Jun 2009 15:47:43 +0000 (08:47 -0700)]
Merge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block

* 'for-linus' of git://git.kernel.dk/linux-2.6-block:
  bsg: setting rq->bio to NULL

14 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6
Linus Torvalds [Tue, 9 Jun 2009 15:47:27 +0000 (08:47 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6:
  cls_cgroup: Fix oops when user send improperly 'tc filter add' request
  r8169: fix crash when large packets are received

14 years agoMerge branch 'for-linus' of git://neil.brown.name/md
Linus Torvalds [Tue, 9 Jun 2009 15:41:22 +0000 (08:41 -0700)]
Merge branch 'for-linus' of git://neil.brown.name/md

* 'for-linus' of git://neil.brown.name/md:
  md/raid5: fix bug in reshape code when chunk_size decreases.
  md/raid5 - avoid deadlocks in get_active_stripe during reshape
  md/raid5: use conf->raid_disks in preference to mddev->raid_disk

14 years agobsg: setting rq->bio to NULL
FUJITA Tomonori [Tue, 9 Jun 2009 13:17:37 +0000 (15:17 +0200)]
bsg: setting rq->bio to NULL

Due to commit 1cd96c242a829d52f7a5ae98f554ca9775429685 ("block: WARN
in __blk_put_request() for potential bio leak"), BSG SMP requests trigger
false warnings:

WARNING: at block/blk-core.c:1068 __blk_put_request+0x52/0xc0()

This sets rq->bio to NULL to avoid these false warnings.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
14 years agokvm: fix kvm reboot crash when MAXSMP is used
Avi Kivity [Sat, 6 Jun 2009 21:52:35 +0000 (14:52 -0700)]
kvm: fix kvm reboot crash when MAXSMP is used

On one system there is a crash during reboot when kvm is used with MAXSMP:
Sending all processes the KILL signal...                              done
Please stand by while rebooting the system...
[ 1721.856538] md: stopping all md devices.
[ 1722.852139] kvm: exiting hardware virtualization
[ 1722.854601] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 1722.872219] IP: [<ffffffff8102c6b6>] hardware_disable+0x4c/0xb4
[ 1722.877955] PGD 0
[ 1722.880042] Oops: 0000 [#1] SMP
[ 1722.892548] last sysfs file: /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:2:0/0:2:0:0/vendor
[ 1722.900977] CPU 9
[ 1722.912606] Modules linked in:
[ 1722.914226] Pid: 0, comm: swapper Not tainted 2.6.30-rc7-tip-01843-g2305324-dirty #299 ...
[ 1722.932589] RIP: 0010:[<ffffffff8102c6b6>]  [<ffffffff8102c6b6>] hardware_disable+0x4c/0xb4
[ 1722.942709] RSP: 0018:ffffc900010b6ed8  EFLAGS: 00010046
[ 1722.956121] RAX: 0000000000000000 RBX: ffffc9000e253140 RCX: 0000000000000009
[ 1722.972202] RDX: 000000000000b020 RSI: ffffc900010c3220 RDI: ffffffffffffd790
[ 1722.977399] RBP: ffffc900010b6f08 R08: 0000000000000000 R09: 0000000000000000
[ 1722.995149] R10: 00000000000004b8 R11: 966912b6c78fddbd R12: 0000000000000009
[ 1723.011551] R13: 000000000000b020 R14: 0000000000000009 R15: 0000000000000000
[ 1723.019898] FS:  0000000000000000(0000) GS:ffffc900010b3000(0000) knlGS:0000000000000000
[ 1723.034389] CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
[ 1723.041164] CR2: 0000000000000000 CR3: 0000000001001000 CR4: 00000000000006e0
[ 1723.056192] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1723.072546] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 1723.080562] Process swapper (pid: 0, threadinfo ffff88107e464000, task ffff88047e5a2550)
[ 1723.096144] Stack:
[ 1723.099071]  0000000000000046 ffffc9000e253168 966912b6c78fddbd ffffc9000e253140
[ 1723.115471]  ffff880c7d4304d0 ffffc9000e253168 ffffc900010b6f28 ffffffff81011022
[ 1723.132428]  ffffc900010b6f48 966912b6c78fddbd ffffc900010b6f48 ffffffff8100b83b
[ 1723.141973] Call Trace:
[ 1723.142981]  <IRQ> <0> [<ffffffff81011022>] kvm_arch_hardware_disable+0x26/0x3c
[ 1723.158153]  [<ffffffff8100b83b>] hardware_disable+0x3f/0x55
[ 1723.172168]  [<ffffffff810b95f6>] generic_smp_call_function_interrupt+0x76/0x13c
[ 1723.178836]  [<ffffffff8104cbea>] smp_call_function_interrupt+0x3a/0x5e
[ 1723.194689]  [<ffffffff81035bf3>] call_function_interrupt+0x13/0x20
[ 1723.199750]  <EOI> <0> [<ffffffff814ad3b4>] ? acpi_idle_enter_c1+0xd3/0xf4
[ 1723.217508]  [<ffffffff814ad3ae>] ? acpi_idle_enter_c1+0xcd/0xf4
[ 1723.232172]  [<ffffffff814ad4bc>] ? acpi_idle_enter_bm+0xe7/0x2ce
[ 1723.235141]  [<ffffffff81a8d93f>] ? __atomic_notifier_call_chain+0x0/0xac
[ 1723.253381]  [<ffffffff818c3dff>] ? menu_select+0x58/0xd2
[ 1723.258179]  [<ffffffff818c2c9d>] ? cpuidle_idle_call+0xa4/0xf3
[ 1723.272828]  [<ffffffff81034085>] ? cpu_idle+0xb8/0x101
[ 1723.277085]  [<ffffffff81a80163>] ? start_secondary+0x1bc/0x1d7
[ 1723.293708] Code: b0 00 00 65 48 8b 04 25 28 00 00 00 48 89 45 e0 31 c0 48 8b 04 cd 30 ee 27 82 49 89 cc 49 89 d5 48 8b 04 10 48 8d b8 90 d7 ff ff <48> 8b 87 70 28 00 00 48 8d 98 90 d7 ff ff eb 16 e8 e9 fe ff ff
[ 1723.335524] RIP  [<ffffffff8102c6b6>] hardware_disable+0x4c/0xb4
[ 1723.342076]  RSP <ffffc900010b6ed8>
[ 1723.352021] CR2: 0000000000000000
[ 1723.354348] ---[ end trace e2aec53dae150aa1 ]---

It turns out that we need to clear cpus_hardware_enabled in that case.

Reported-and-tested-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
14 years agocpumask: alloc zeroed cpumask for static cpumask_var_ts
Yinghai Lu [Sat, 6 Jun 2009 21:51:36 +0000 (14:51 -0700)]
cpumask: alloc zeroed cpumask for static cpumask_var_ts

These are defined as static cpumask_var_t so if MAXSMP is not used,
they are cleared already.  Avoid surprises when MAXSMP is enabled.

Signed-off-by: Yinghai Lu <yinghai.lu@kernel.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
14 years agocpumask: introduce zalloc_cpumask_var
Yinghai Lu [Sat, 6 Jun 2009 21:50:36 +0000 (14:50 -0700)]
cpumask: introduce zalloc_cpumask_var

So callers can get a cpumask_var_t that is already cleared, instead of pairing
alloc_cpumask_var() with cpumask_clear().
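
A minimal sketch of the difference:

cpumask_var_t mask;

/* before: allocate, then clear */
if (!alloc_cpumask_var(&mask, GFP_KERNEL))
        return -ENOMEM;
cpumask_clear(mask);

/* after: one call returns an already-zeroed mask */
if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
        return -ENOMEM;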

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
14 years agocls_cgroup: Fix oops when user send improperly 'tc filter add' request
Minoru Usui [Tue, 9 Jun 2009 11:03:09 +0000 (04:03 -0700)]
cls_cgroup: Fix oops when user send improperly 'tc filter add' request

I found a bug in cls_cgroup_change() in cls_cgroup.c.
cls_cgroup_change() expected tca[TCA_OPTIONS] to be set properly from user space,
but tc in iproute2-2.6.29-1 (which I used) didn't set it.

The current source code of tc in git does set tca[TCA_OPTIONS]:

  git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git

If we always use the newest iproute2 from git when we use cls_cgroup,
we probably won't hit this oops.
But the kernel shouldn't panic regardless of how a userspace program behaves.

Signed-off-by: Minoru Usui <usui@mxm.nes.nec.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
14 years agor8169: fix crash when large packets are received
Eric Dumazet [Tue, 9 Jun 2009 11:01:02 +0000 (04:01 -0700)]
r8169: fix crash when large packets are received

Michael Tokarev reported that receiving a large packet could crash
a machine with an RTL8169 NIC.
( original thread at http://lkml.org/lkml/2009/6/8/192 )

The problem is that this driver tells the NIC that frames up to 16383 bytes
can be received, but provides the rx ring with skbs allocated at
smaller sizes (1536 bytes when the standard 1500-byte MTU is used).

When a frame larger than what the driver allocated is received, the
DMA transfer can occur past the end of the buffer and corrupt
kernel memory.

The fix is to tell the NIC the maximum size a frame can be.

This bug is very old (predating the introduction of git, linux-2.6.10) and
should be backported to stable versions.

Reported-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
14 years agomd/raid5: fix bug in reshape code when chunk_size decreases.
NeilBrown [Tue, 9 Jun 2009 06:32:22 +0000 (16:32 +1000)]
md/raid5: fix bug in reshape code when chunk_size decreases.

Now that we support changing the chunksize, we calculate
"reshape_sectors" to be the max of the number of sectors in the old
and new chunk sizes.
However there is one place where we still use 'chunksize'
rather than 'reshape_sectors'.
This causes a reshape that reduces the size of chunks to freeze.

Signed-off-by: NeilBrown <neilb@suse.de>
14 years agomd/raid5 - avoid deadlocks in get_active_stripe during reshape
NeilBrown [Tue, 9 Jun 2009 04:39:59 +0000 (14:39 +1000)]
md/raid5 - avoid deadlocks in get_active_stripe during reshape

md has functionality to 'quiesce' an array so that all pending
IO completes and no new IO starts.  This is used to achieve a
stable state before making internal changes.

Currently this quiescing applies equally to normal IO, resync
IO, and reshape IO.
However there is a problem with applying it to reshape IO.
Reshape can have multiple 'stripe_heads' that must be active together.
If the quiesce comes between allocating the first and the last of
such a collection, then we deadlock, as the last will not be allocated
until the quiesce is lifted, the quiesce will not be lifted until the
first (which has been allocated) gets used, and that first cannot be
used until the last is allocated.

It is not necessary to inhibit reshape IO when a quiesce is
requested.  Those places in the code that require a full quiesce will
ensure the reshape thread is not running at all.

So allow reshape requests to get access to new stripe_heads without
being blocked by a 'quiesce'.

This only affects in-place reshapes (i.e. where the array does not
grow or shrink) and these are only newly supported.  So this patch is
not needed in earlier kernels.

Signed-off-by: NeilBrown <neilb@suse.de>
14 years agomd/raid5: use conf->raid_disks in preference to mddev->raid_disk
NeilBrown [Tue, 9 Jun 2009 04:30:31 +0000 (14:30 +1000)]
md/raid5: use conf->raid_disks in preference to mddev->raid_disk

mddev->raid_disks can be changed at any time by a request from
user-space.  It is a suggestion as to what number of raid_disks is
desired.

conf->raid_disks can only be changed by the raid5 module with suitable
locks in place.  It is a statement as to the current number of
raid_disks.

There are two places where the latter should be used, but the former
is used.  This can lead to a crash when reshaping an array.

This patch changes mddev-> to conf->.

Signed-off-by: NeilBrown <neilb@suse.de>
14 years agoasync: Fix lack of boot-time console due to insufficient synchronization
Linus Torvalds [Mon, 8 Jun 2009 19:31:53 +0000 (12:31 -0700)]
async: Fix lack of boot-time console due to insufficient synchronization

Our async work synchronization was broken by "async: make sure
independent async domains can't accidentally entangle" (commit
d5a877e8dd409d8c702986d06485c374b705d340), because it would report
the wrong lowest active async ID when there was both running and
pending async work.

This caused things like not being able to read the root filesystem,
resulting in missing console devices and inability to run 'init',
causing a boot-time panic.

This fixes it by properly returning the lowest pending async ID: if
there is any running async work, that will have a lower ID than any
pending work, and we should _not_ look at the pending work list.

There were alternative patches from Jaswinder and James, but this one
also cleans up the code by removing the pointless 'ret' variable and
the unnecessary testing for an empty list around 'for_each_entry()' (if
the list is empty, the for_each_entry() thing just won't execute).

Fixes-bug: http://bugzilla.kernel.org/show_bug.cgi?id=13474
Reported-and-tested-by: Chris Clayton <chris2553@googlemail.com>
Cc: Jaswinder Singh Rajput <jaswinder@kernel.org>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
14 years agoMerge branch 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus
Linus Torvalds [Mon, 8 Jun 2009 16:22:53 +0000 (09:22 -0700)]
Merge branch 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus

* 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus:
  MIPS: Outline udelay and fix a few issues.
  MIPS: ioctl.h: Fix headers_check warnings
  MIPS: Cobalt: PCI bus is always required to obtain the board ID
  MIPS: Kconfig: Remove "Support for" from Cavium system type
  MIPS: Sibyte: Honor CONFIG_CMDLINE
  SSB: BCM47xx: Export ssb_watchdog_timer_set

14 years agopata_netcell: Fix typo
Alan Cox [Mon, 8 Jun 2009 11:31:00 +0000 (12:31 +0100)]
pata_netcell: Fix typo

The previous patch submission had a typo I didn't catch, but Bartlomiej
noted it.  Guess this proves the point about any patch being risky late in an rc.

Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
14 years agoMerge branch 'kvm-updates/2.6.30' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds [Mon, 8 Jun 2009 16:05:48 +0000 (09:05 -0700)]
Merge branch 'kvm-updates/2.6.30' of git://git.kernel.org/pub/scm/virt/kvm/kvm

* 'kvm-updates/2.6.30' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: Explicity initialize cpus_hardware_enabled

14 years agoMerge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6
Linus Torvalds [Mon, 8 Jun 2009 16:04:55 +0000 (09:04 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6:
  pdc202xx_old: fix resetproc() method
  pdc202xx_old: fix 'pdc20246_dma_ops'

14 years agoMIPS: Outline udelay and fix a few issues.
Ralf Baechle [Sat, 28 Feb 2009 09:44:28 +0000 (09:44 +0000)]
MIPS: Outline udelay and fix a few issues.

Outlining fixes the issue where, on certain CPUs such as the R10000 family,
the delay loop would need an extra cycle if it overlaps a cacheline
boundary.

The rewrite also fixes build errors with GCC 4.4, which was changed in a
way incompatible with the kernel's inline assembly.

Relying on pure C for computation of the delay value removes the need for
explicit.  The price we pay is a slight slowdown of the computation - to
be fixed on another day.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
14 years agoMIPS: ioctl.h: Fix headers_check warnings
Jaswinder Singh Rajput [Thu, 4 Jun 2009 12:35:49 +0000 (18:05 +0530)]
MIPS: ioctl.h: Fix headers_check warnings

Make ioctl.h compatible with asm-generic/ioctl.h and userspace

fix the following 'make headers_check' warning:

  usr/include/asm-mips/ioctl.h:64: extern's make no sense in userspace

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
14 years agoMIPS: Cobalt: PCI bus is always required to obtain the board ID
Yoichi Yuasa [Tue, 2 Jun 2009 14:17:07 +0000 (23:17 +0900)]
MIPS: Cobalt: PCI bus is always required to obtain the board ID

Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
14 years agoMIPS: Kconfig: Remove "Support for" from Cavium system type
Yoichi Yuasa [Tue, 2 Jun 2009 14:15:10 +0000 (23:15 +0900)]
MIPS: Kconfig: Remove "Support for" from Cavium system type

Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
Acked-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
14 years agoMIPS: Sibyte: Honor CONFIG_CMDLINE
Ralf Baechle [Tue, 2 Jun 2009 18:05:28 +0000 (19:05 +0100)]
MIPS: Sibyte: Honor CONFIG_CMDLINE

Original patch by Imre Kaloz <kaloz@openwrt.org>.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
14 years agoSSB: BCM47xx: Export ssb_watchdog_timer_set
Matthieu Castet [Fri, 22 May 2009 20:25:04 +0000 (22:25 +0200)]
SSB: BCM47xx: Export ssb_watchdog_timer_set

This patch exports ssb_watchdog_timer_set to allow it to be used in a Linux
watchdog driver.

Signed-off-by: Matthieu CASTET <castet.matthieu@free.fr>
Acked-by : Michael Buesch <mb@bu3sch.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
14 years agoMerge master.kernel.org:/home/rmk/linux-2.6-arm
Linus Torvalds [Mon, 8 Jun 2009 15:29:31 +0000 (08:29 -0700)]
Merge master.kernel.org:/home/rmk/linux-2.6-arm

* master.kernel.org:/home/rmk/linux-2.6-arm:
  [ARM] 5543/1: arm: serial amba: add missing declaration in serial.h
  [ARM] pxa: fix pxa27x_udc default pullup GPIO
  [ARM] pxa/imote2: fix UCAM sensor board ADC model number
  mx[23]: don't put clock lookups in __initdata
  fix oops when using console=ttymxcN with N > 0
  [ARM] ARMv7 errata: only apply fixes when running on applicable CPU
  [ARM] 5534/1: kmalloc must return a cache line aligned buffer

14 years agoMerge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/drzeus/mmc
Linus Torvalds [Mon, 8 Jun 2009 14:53:59 +0000 (07:53 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/drzeus/mmc

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/drzeus/mmc:
  sdhci-of: Fix the wrong accessor to HOSTVER register
  mvsdio: fix config failure with some high speed SDHC cards
  mvsdio: ignore high speed timing requests from the core
  mmc/omap: Use disable_irq_nosync() from within irq handlers.
  sdhci-of: Add fsl,esdhc as a valid compatible to bind against
  mvsdio: allow automatic loading when modular
  mxcmmc: Fix missing return value checking in DMA setup code.
  mxcmmc : Reset the SDHC hardware if software timeout occurs.
  omap_hsmmc: Trivial fix for a typo in comment
  mxcmmc: decrease minimum frequency to make MMC cards work

14 years agoKVM: Explicity initialize cpus_hardware_enabled
Avi Kivity [Sat, 6 Jun 2009 09:34:39 +0000 (12:34 +0300)]
KVM: Explicity initialize cpus_hardware_enabled

Under CONFIG_MAXSMP, cpus_hardware_enabled is allocated from the heap and
not statically initialized.  This causes a crash on reboot when kvm thinks
vmx is enabled on random nonexistent cpus and accesses nonexistent percpu
lists.

Fix by explicitly clearing the variable.

Cc: stable@kernel.org
Reported-and-tested-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
14 years ago[ARM] 5543/1: arm: serial amba: add missing declaration in serial.h
Alessandro Rubini [Sat, 6 Jun 2009 09:17:57 +0000 (10:17 +0100)]
[ARM] 5543/1: arm: serial amba: add missing declaration in serial.h

This header is sometimes included in the uncompress stage to get
register values, but no <linux/amba/bus.h> can be included there.
So declare "struct amba_device" here before using it in a prototype.
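
That is, something along these lines in the header (the prototype shown is
purely illustrative):

struct amba_device;     /* forward declaration; avoids <linux/amba/bus.h> */

extern int serial_register_port(struct amba_device *dev);      /* illustrative prototype */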

Signed-off-by: Alessandro Rubini <rubini@unipv.it>
Acked-by: Andrea Gallo <andrea.gallo@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
14 years agopdc202xx_old: fix resetproc() method
Sergei Shtylyov [Sun, 7 Jun 2009 11:52:50 +0000 (13:52 +0200)]
pdc202xx_old: fix resetproc() method

pdc202xx_reset() calls pdc202xx_reset_host() twice, for both channels, while
that function actually twiddles the single, shared software reset bit -- the
net effect is a duplicated reset and horrendous 4 second delay happening not
only on a channel reset but also when dma_lost_irq() and dma_clear() methods
are called.  Fold pdc202xx_reset_host() into pdc202xx_reset(), fix printk(),
and move it before the actual reset...

Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
14 years agopdc202xx_old: fix 'pdc20246_dma_ops'
Sergei Shtylyov [Sun, 7 Jun 2009 11:52:50 +0000 (13:52 +0200)]
pdc202xx_old: fix 'pdc20246_dma_ops'

Commit ac95beedf8bc97b24f9540d4da9952f07221c023 (ide: add struct ide_port_ops
(take 2)) erroneously converted the driver's dma_timeout() and dma_lost_irq()
methods to call the driver's resetproc() method regardless of whether it was
defined for this specific controller while it hadn't been defined and hence
called for PDC20246. So the dma_clear() method, the successor of dma_timeout(),
shouldn't exist and the dma_lost_irq() method should be standard for PDC20246.

Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>