Leonid I Ananiev <leonid.i.ananiev@intel.com>
Linas Vepstas <linas@austin.ibm.com>
Matthieu CASTET <castet.matthieu@free.fr>
+Michael Buesch <mb@bu3sch.de>
+Michael Buesch <mbuesch@freenet.de>
Michel Dänzer <michel@tungstengraphics.com>
Mitesh shah <mshah@teja.com>
Morten Welinder <terra@gnome.org>
--- /dev/null
+What: dv1394 (a.k.a. "OHCI-DV I/O support" for FireWire)
+Contact: linux1394-devel@lists.sourceforge.net
+Description:
+ New application development should use raw1394 + userspace libraries
+ instead, notably libiec61883, which is functionally equivalent.
+
+Users:
+ ffmpeg/libavformat (used by a variety of media players)
+ dvgrab v1.x (replaced by dvgrab2 on top of raw1394 and respective libraries)
Add some cpus:
# /bin/echo 0-7 > cpus
+Add some mems:
+# /bin/echo 0-7 > mems
+
Now attach your shell to this cpuset:
# /bin/echo $$ > tasks
desc.tfm = tfm;
desc.flags = 0;
- if (crypto_hash_digest(&desc, &sg, 2, result))
+ if (crypto_hash_digest(&desc, sg, 2, result))
fail();
crypto_free_hash(tfm);
---------------------------
-What: dv1394 driver (CONFIG_IEEE1394_DV1394)
-When: June 2007
-Why: Replaced by raw1394 + userspace libraries, notably libiec61883. This
- shift of application support has been indicated on www.linux1394.org
- and developers' mailinglists for quite some time. Major applications
- have been converted, with the exception of ffmpeg and hence xine.
- Piped output of dvgrab2 is a partial equivalent to dv1394.
-Who: Dan Dennedy <dan@dennedy.org>, Stefan Richter <stefanr@s5r6.in-berlin.de>
-
----------------------------
-
What: Video4Linux API 1 ioctls and video_decoder.h from Video devices.
When: December 2006
Why: V4L1 API was replaced by V4L2 API during migration from 2.4 to 2.6
Who: Johannes Berg <johannes@sipsolutions.net>
---------------------------
+
+What: i8xx_tco watchdog driver
+When: in 2.6.22
+Why: the i8xx_tco watchdog driver has been replaced by the iTCO_wdt
+ watchdog driver.
+Who: Wim Van Sebroeck <wim@iguana.be>
+
+---------------------------
- Output values are writable (high=1, low=0). Some chips also have
options about how that value is driven, so that for example only one
value might be driven ... supporting "wire-OR" and similar schemes
- for the other value.
+ for the other value (notably, "open drain" signaling).
- Input values are likewise readable (1, 0). Some chips support readback
of pins configured as "output", which is very useful in such "wire-OR"
/* set as input or output, returning 0 or negative errno */
int gpio_direction_input(unsigned gpio);
- int gpio_direction_output(unsigned gpio);
+ int gpio_direction_output(unsigned gpio, int value);
The return value is zero for success, else a negative errno. It should
be checked, since the get/set calls don't have error returns and since
misconfiguration is possible. (These calls could sleep.)
+For output GPIOs, the value provided becomes the initial output value.
+This helps avoid signal glitching during system startup.
+
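+A minimal sketch of a driver setting up one such line and checking the
+result (the "led_gpio" number is just a placeholder for illustration):
+
+	int err = gpio_direction_output(led_gpio, 0);	/* start driven low */
+	if (err < 0)
+		printk(KERN_ERR "GPIO %u unusable as output: %d\n", led_gpio, err);
+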
Setting the direction can fail if the GPIO number is invalid, or when
that particular GPIO can't be used in that mode. It's generally a bad
idea to rely on boot firmware to have set the direction correctly, since
when the IRQ is edge-triggered.
+Emulating Open Drain Signals
+----------------------------
+Sometimes shared signals need to use "open drain" signaling, where only the
+low signal level is actually driven. (That term applies to CMOS transistors;
+"open collector" is used for TTL.) A pullup resistor causes the high signal
+level. This is sometimes called a "wire-AND"; or more practically, from the
+negative logic (low=true) perspective this is a "wire-OR".
+
+One common example of an open drain signal is a shared active-low IRQ line.
+Also, bidirectional data bus signals sometimes use open drain signals.
+
+Some GPIO controllers directly support open drain outputs; many don't. When
+you need open drain signaling but your hardware doesn't directly support it,
+there's a common idiom you can use to emulate it with any GPIO pin that can
+be used as either an input or an output:
+
+ LOW: gpio_direction_output(gpio, 0) ... this drives the signal
+ and overrides the pullup.
+
+ HIGH: gpio_direction_input(gpio) ... this turns off the output,
+ so the pullup (or some other device) controls the signal.
+
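+A minimal sketch of that idiom wrapped in a helper (the "oc_set" name is
+made up here, and the caller is assumed to have already claimed the GPIO):
+
+	/* emulate an open drain output on "gpio" */
+	static void oc_set(unsigned gpio, int value)
+	{
+		if (value)
+			gpio_direction_input(gpio);	/* HIGH: stop driving; pullup wins */
+		else
+			gpio_direction_output(gpio, 0);	/* LOW: actively drive the line */
+	}
+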
+If you are "driving" the signal high but gpio_get_value(gpio) reports a low
+value (after the appropriate rise time passes), you know some other component
+is driving the shared signal low. That's not necessarily an error. As one
+common example, that's how I2C clocks are stretched: a slave that needs a
+slower clock delays the rising edge of SCK, and the I2C master adjusts its
+signaling rate accordingly.
+
What do these conventions omit?
===============================
See also Documentation/pm.txt, pci=noacpi
+ acpi_apic_instance= [ACPI, IOAPIC]
+ Format: <int>
+ 2: use 2nd APIC table, if available
+ 1,0: use 1st APIC table
+ default: 0
+
acpi_sleep= [HW,ACPI] Sleep options
Format: { s3_bios, s3_mode }
See Documentation/power/video.txt
lapic [IA-32,APIC] Enable the local APIC even if BIOS
disabled it.
+ lapic_timer_c2_ok [IA-32,x86-64,APIC] trust the local apic timer in
+ C2 power state.
+
lasi= [HW,SCSI] PARISC LASI driver for the 53c700 chip
Format: addr:<io>,irq:<irq>
nolapic [IA-32,APIC] Do not enable or use the local APIC.
+ nolapic_timer [IA-32,APIC] Do not use the local APIC timer.
+
noltlbs [PPC] Do not use large page/tlb entries for kernel
lowmem mapping on PPC40x.
To use the amateur radio protocols within Linux you will need to get a
-suitable copy of the AX.25 Utilities. More detailed information about these
-and associated programs can be found on http://zone.pspt.fi/~jsn/.
-
-For more information about the AX.25, NET/ROM and ROSE protocol stacks, see
-the AX25-HOWTO written by Terry Dawson <terry@perf.no.itg.telstra.com.au>
-who is also the AX.25 Utilities maintainer.
+suitable copy of the AX.25 Utilities. More detailed information about
+AX.25, NET/ROM and ROSE, associated programs and utilities can be
+found on http://www.linux-ax25.org.
There is an active mailing list for discussing Linux amateur radio matters
-called linux-hams. To subscribe to it, send a message to
+called linux-hams@vger.kernel.org. To subscribe to it, send a message to
majordomo@vger.kernel.org with the words "subscribe linux-hams" in the body
-of the message, the subject field is ignored.
-
-Jonathan G4KLX
-
-g4klx@g4klx.demon.co.uk
+of the message; the subject field is ignored. You don't need to be
+subscribed to post, but of course that means you might miss an answer.
Functional default: enabled if local forwarding is disabled.
disabled if local forwarding is enabled.
+accept_source_route - INTEGER
+ Accept source routing (routing extension header).
+
+ > 0: Accept routing header.
+ = 0: Accept only routing header type 2.
+ < 0: Do not accept routing header.
+
+ Default: 0
+
autoconf - BOOLEAN
Autoconfigure addresses using Prefix Information in Router
Advertisements.
--------------
Usage:
- pci_save_state(dev, buffer);
+ pci_save_state(struct pci_dev *dev);
Description:
- Save first 64 bytes of PCI config space. Buffer must be allocated by
- caller.
+ Save first 64 bytes of PCI config space, along with any additional
+ PCI-Express or PCI-X information.
pci_restore_state
-----------------
Usage:
- pci_restore_state(dev, buffer);
+ pci_restore_state(struct pci_dev *dev);
Description:
- Restore previously saved config space. (First 64 bytes only);
-
- If buffer is NULL, then restore what information we know about the
- device from bootup: BARs and interrupt line.
+ Restore previously saved config space.
pci_set_power_state
-------------------
Usage:
- pci_set_power_state(dev, state);
+ pci_set_power_state(struct pci_dev *dev, pci_power_t state);
Description:
Transition device to low power state using PCI PM Capabilities
---------------
Usage:
- pci_enable_wake(dev, state, enable);
+ pci_enable_wake(struct pci_dev *dev, pci_power_t state, int enable);
Description:
Enable device to generate PME# during low power state using PCI PM Capabilities.
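+A rough sketch of how a driver's suspend and resume paths might chain these
+calls (the "mydrv_" names are placeholders and error handling is trimmed):
+
+	static int mydrv_suspend(struct pci_dev *pdev, pm_message_t state)
+	{
+		pci_save_state(pdev);			/* config space + PCI-X/PCIe extras */
+		pci_enable_wake(pdev, PCI_D3hot, 1);	/* allow PME# from D3hot */
+		pci_set_power_state(pdev, PCI_D3hot);
+		return 0;
+	}
+
+	static int mydrv_resume(struct pci_dev *pdev)
+	{
+		pci_set_power_state(pdev, PCI_D0);
+		pci_restore_state(pdev);
+		pci_enable_wake(pdev, PCI_D0, 0);	/* wakeup no longer needed */
+		return 0;
+	}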
basic 3-jack (default)
hp HP nx6320
thinkpad Lenovo Thinkpad T60/X60/Z60
+ toshiba Toshiba U205
AD1986A
6stack 6-jack, separate surrounds (default)
5stack D945 5stack + SPDIF
macmini Intel Mac Mini
macbook Intel Mac Book
- macbook-pro Intel Mac Book Pro
+ macbook-pro-v1 Intel Mac Book Pro 1st generation
+ macbook-pro Intel Mac Book Pro 2nd generation
STAC9202/9250/9251
ref Reference board, base config
'p' - Will dump the current registers and flags to your console.
+'q' - Will dump a list of all running timers.
+
'r' - Turns off keyboard raw mode and sets it to XLATE.
's' - Will attempt to sync all mounted filesystems.
stuck (default)
Miscellaneous
-
- noreplacement Don't replace instructions with more appropriate ones
- for the CPU. This may be useful on asymmetric MP systems
- where some CPUs have less capabilities than others.
W: http://www.stud.uni-karlsruhe.de/~uh1b/
S: Maintained
+IPS SCSI RAID DRIVER
+P: Adaptec OEM Raid Solutions
+M: aacraid@adaptec.com
+L: linux-scsi@vger.kernel.org
+W: http://www.adaptec.com/
+S: Maintained
+
+DPT_I2O SCSI RAID DRIVER
+P: Adaptec OEM Raid Solutions
+M: aacraid@adaptec.com
+L: linux-scsi@vger.kernel.org
+W: http://www.adaptec.com/
+S: Maintained
+
AACRAID SCSI RAID DRIVER
P: Adaptec OEM Raid Solutions
+M: aacraid@adaptec.com
L: linux-scsi@vger.kernel.org
-W: http://linux.dell.com/storage.shtml
+W: http://www.adaptec.com/
S: Supported
ACPI
ETHERNET BRIDGE
P: Stephen Hemminger
M: shemminger@linux-foundation.org
-L: bridge@lists.osdl.org
+L: bridge@lists.linux-foundation.org
W: http://bridge.sourceforge.net/
S: Maintained
W: http://www.farsite.co.uk/
S: Supported
+FAULT INJECTION SUPPORT
+P: Akinobu Mita
+M: akinobu.mita@gmail.com
+S: Supported
+
FRAMEBUFFER LAYER
P: Antonino Daplas
M: adaplas@gmail.com
W: ftp://ftp.openlinux.org/pub/people/hch/vxfs
S: Maintained
-FUJITSU FR-V PORT
+FUJITSU FR-V (FRV) PORT
P: David Howells
M: dhowells@redhat.com
S: Maintained
T: quilt http://khali.linux-fr.org/devel/linux-2.6/jdelvare-i2c/
S: Maintained
-I2O
-P: Markus Lidel
-M: markus.lidel@shadowconnect.com
-W: http://i2o.shadowconnect.com/
-S: Maintained
-
i386 BOOT CODE
P: Riley H. Williams
M: Riley@Williams.Name
IEEE 1394 SUBSYSTEM
P: Ben Collins
-M: bcollins@debian.org
+M: ben.collins@ubuntu.com
P: Stefan Richter
M: stefanr@s5r6.in-berlin.de
L: linux1394-devel@lists.sourceforge.net
T: git kernel.org:/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6.git
S: Maintained
-IEEE 1394 IPV4 DRIVER (eth1394)
-P: Stefan Richter
-M: stefanr@s5r6.in-berlin.de
-L: linux1394-devel@lists.sourceforge.net
-S: Odd Fixes
-
-IEEE 1394 PCILYNX DRIVER
-P: Jody McIntyre
-M: scjody@modernduck.com
-P: Stefan Richter
-M: stefanr@s5r6.in-berlin.de
-L: linux1394-devel@lists.sourceforge.net
-S: Odd Fixes
-
-IEEE 1394 RAW I/O DRIVER
-P: Ben Collins
-M: bcollins@debian.org
+IEEE 1394 RAW I/O DRIVER (raw1394)
P: Dan Dennedy
M: dan@dennedy.org
+P: Stefan Richter
+M: stefanr@s5r6.in-berlin.de
L: linux1394-devel@lists.sourceforge.net
S: Maintained
M: vgoyal@in.ibm.com
P: Haren Myneni
M: hbabu@us.ibm.com
-L: fastboot@lists.osdl.org
+L: fastboot@lists.linux-foundation.org
L: linux-kernel@vger.kernel.org
W: http://lse.sourceforge.net/kdump/
S: Maintained
KERNEL JANITORS
P: Several
-L: kernel-janitors@lists.osdl.org
+L: kernel-janitors@lists.linux-foundation.org
W: http://www.kerneljanitors.org/
S: Maintained
M: ebiederm@xmission.com
W: http://www.xmission.com/~ebiederm/files/kexec/
L: linux-kernel@vger.kernel.org
-L: fastboot@lists.osdl.org
+L: fastboot@lists.linux-foundation.org
S: Maintained
KPROBES
NETEM NETWORK EMULATOR
P: Stephen Hemminger
M: shemminger@linux-foundation.org
-L: netem@lists.osdl.org
+L: netem@lists.linux-foundation.org
S: Maintained
NETFILTER/IPTABLES/IPCHAINS
S: Maintained
SCTP PROTOCOL
+P: Vlad Yasevich
+M: vladislav.yasevich@hp.com
P: Sridhar Samudrala
M: sri@us.ibm.com
L: lksctp-developers@lists.sourceforge.net
+W: http://lksctp.sourceforge.net
S: Supported
SCx200 CPU SUPPORT
SOFTWARE SUSPEND:
P: Pavel Machek
M: pavel@suse.cz
-L: linux-pm@lists.osdl.org
+L: linux-pm@lists.linux-foundation.org
S: Maintained
SONIC NETWORK DRIVER
S: Maintained
SONY VAIO CONTROL DEVICE DRIVER
-P: Stelian Pop
-M: stelian@popies.net
P: Mattia Dongili
M: malattia@linux.it
-W: http://popies.net/sonypi/
+L: linux-acpi@vger.kernel.org
+W: http://www.linux.it/~malattia/wiki/index.php/Sony_drivers
S: Maintained
SOUND
P: Kylene Hall
M: kjhall@us.ibm.com
W: http://tpmdd.sourceforge.net
+P: Marcel Selhorst
+M: tpm@selhorst.net
+W: http://www.prosec.rub.de/tpm/
L: tpmdd-devel@lists.sourceforge.net
S: Maintained
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 21
-EXTRAVERSION = -rc3
-NAME = Homicidal Dwarf Hamster
+EXTRAVERSION =
+NAME = Nocturnal Monster Puppy
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"
# define DBG_CFG(args)
#endif
-#define MCPCIA_MAX_HOSES 4
-
/*
* Given a bus, device, and function number, compute resulting
* configuration space address and setup the MCPCIA_HAXR2 register
#include <asm/smp.h>
#include <asm/err_common.h>
#include <asm/err_ev6.h>
+#include <asm/irq_regs.h>
#include "err_impl.h"
#include "proto.h"
reloc_overflow:
if (ELF64_ST_TYPE (sym->st_info) == STT_SECTION)
printk(KERN_ERR
- "module %s: Relocation overflow vs section %d\n",
- me->name, sym->st_shndx);
+ "module %s: Relocation (type %lu) overflow vs section %d\n",
+ me->name, r_type, sym->st_shndx);
else
printk(KERN_ERR
- "module %s: Relocation overflow vs %s\n",
- me->name, strtab + sym->st_name);
+ "module %s: Relocation (type %lu) overflow vs %s\n",
+ me->name, r_type, strtab + sym->st_name);
return -ENOEXEC;
}
}
/* Preserve the IRQ set up by the console. */
u8 irq;
+ /* UP1500: AGP INTA is actually routed to IRQ 5, not IRQ 10 as
+ console reports. Check the device id of AGP bridge to distinguish
+ UP1500 from UP1000/1100. Note: 'pin' is 2 due to bridge swizzle. */
+ if (slot == 1 && pin == 2 &&
+ dev->bus->self && dev->bus->self->device == 0x700f)
+ return 5;
pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq);
return irq;
}
return 0;
}
+static void
+noritake_end_irq(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS)))
+ noritake_enable_irq(irq);
+}
+
static struct hw_interrupt_type noritake_irq_type = {
.typename = "NORITAKE",
.startup = noritake_startup_irq,
.enable = noritake_enable_irq,
.disable = noritake_disable_irq,
.ack = noritake_disable_irq,
- .end = noritake_enable_irq,
+ .end = noritake_end_irq,
};
static void
*(vuip)MCPCIA_INT_MASK0(MCPCIA_HOSE2MID(hose));
}
+#define hose_exists(h) \
+ (((h) < MCPCIA_MAX_HOSES) && (cached_irq_masks[(h)] != 0))
+
static inline void
rawhide_enable_irq(unsigned int irq)
{
irq -= 16;
hose = irq / 24;
+ if (!hose_exists(hose)) /* if hose non-existent, exit */
+ return;
+
irq -= hose * 24;
mask = 1 << irq;
irq -= 16;
hose = irq / 24;
+ if (!hose_exists(hose)) /* if hose non-existent, exit */
+ return;
+
irq -= hose * 24;
mask = ~(1 << irq) | hose_irq_masks[hose];
irq -= 16;
hose = irq / 24;
+ if (!hose_exists(hose)) /* if hose non-existent, exit */
+ return;
+
irq -= hose * 24;
mask1 = 1 << irq;
mask = ~mask1 | hose_irq_masks[hose];
mcpcia_init_hoses();
+ /* Clear them all; only hoses that exist will be non-zero. */
+ for (i = 0; i < MCPCIA_MAX_HOSES; i++) cached_irq_masks[i] = 0;
+
for (hose = hose_head; hose; hose = hose->next) {
unsigned int h = hose->index;
unsigned int mask = hose_irq_masks[h];
static void __init
sio_pci_route(void)
{
-#if defined(ALPHA_RESTORE_SRM_SETUP)
- /* First, read and save the original setting. */
+ unsigned int orig_route_tab;
+
+ /* First, ALWAYS read and print the original setting. */
pci_bus_read_config_dword(pci_isa_hose->bus, PCI_DEVFN(7, 0), 0x60,
- &saved_config.orig_route_tab);
+ &orig_route_tab);
printk("%s: PIRQ original 0x%x new 0x%x\n", __FUNCTION__,
- saved_config.orig_route_tab, alpha_mv.sys.sio.route_tab);
+ orig_route_tab, alpha_mv.sys.sio.route_tab);
+
+#if defined(ALPHA_RESTORE_SRM_SETUP)
+ saved_config.orig_route_tab = orig_route_tab;
#endif
/* Now override with desired setting. */
.pci_swizzle = common_swizzle,
.sys = { .sio = {
- .route_tab = 0x0b0a0e0f,
+ .route_tab = 0x0b0a050f, /* leave 14 for IDE, 9 for SND */
}}
};
ALIAS_MV(avanti)
if (amask(AMASK_MAX) != 0
&& alpha_using_srm
- && (cpu->pal_revision & 0xffff) == 0x117) {
+ && (cpu->pal_revision & 0xffff) <= 0x117) {
__asm__ __volatile__(
"lda $16,8($31)\n"
"call_pal 9\n" /* Allow PALRES insns in kernel mode */
*/
while (mask) {
/* convert to SRM vector... priority is <63> -> <0> */
- __asm__("ctlz %1, %0" : "=r"(vector) : "r"(mask));
- vector = 63 - vector;
+ vector = 63 - __kernel_ctlz(mask);
mask &= ~(1UL << vector); /* clear it out */
vector = 0x900 + (vector << 4); /* convert to SRM vector */
config SHARPSL_PM
bool
+ select APM_EMULATION
config SHARP_SCOOP
bool
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.21-rc1
-# Wed Feb 21 16:48:01 2007
+# Linux kernel version: 2.6.21-rc6
+# Mon Apr 9 10:12:58 2007
#
CONFIG_ARM=y
CONFIG_SYS_SUPPORTS_APM_EMULATION=y
+CONFIG_GENERIC_GPIO=y
# CONFIG_GENERIC_TIME is not set
CONFIG_MMU=y
CONFIG_NO_IOPORT=y
# CONFIG_IKCONFIG is not set
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_RELAY is not set
+CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024
-CONFIG_BLK_DEV_INITRD=y
# CONFIG_CDROM_PKTCDVD is not set
CONFIG_ATA_OVER_ETH=m
CONFIG_BLK_DEV_IDE_BAST=y
# CONFIG_IDE_CHIPSETS is not set
# CONFIG_BLK_DEV_IDEDMA is not set
-# CONFIG_IDEDMA_AUTO is not set
# CONFIG_BLK_DEV_HD is not set
#
# LED drivers
#
CONFIG_LEDS_S3C24XX=m
+CONFIG_LEDS_H1940=m
#
# LED Triggers
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
+# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
#
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
-# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_RS5C348 is not set
# CONFIG_RTC_DRV_RS5C372 is not set
CONFIG_RTC_DRV_S3C=y
{
return dma_chan[channel].active;
}
+EXPORT_SYMBOL(dma_channel_active);
void set_dma_page(dmach_t channel, char pagenr)
{
{
int cpu;
- for_each_possible_cpu(cpu)
- register_cpu(&per_cpu(cpu_data, cpu).cpu, cpu);
+ for_each_possible_cpu(cpu) {
+ struct cpuinfo_arm *cpuinfo = &per_cpu(cpu_data, cpu);
+ cpuinfo->cpu.hotpluggable = 1;
+ register_cpu(&cpuinfo->cpu, cpu);
+ }
return 0;
}
at91_sys_write(AT91_SMC_SETUP(3), AT91_SMC_NWESETUP_(0) | AT91_SMC_NCS_WRSETUP_(0)
| AT91_SMC_NRDSETUP_(0) | AT91_SMC_NCS_RDSETUP_(0));
- at91_sys_write(AT91_SMC_PULSE(3), AT91_SMC_NWEPULSE_(2) | AT91_SMC_NCS_WRPULSE_(5)
- | AT91_SMC_NRDPULSE_(2) | AT91_SMC_NCS_RDPULSE_(5));
+ at91_sys_write(AT91_SMC_PULSE(3), AT91_SMC_NWEPULSE_(3) | AT91_SMC_NCS_WRPULSE_(3)
+ | AT91_SMC_NRDPULSE_(3) | AT91_SMC_NCS_RDPULSE_(3));
- at91_sys_write(AT91_SMC_CYCLE(3), AT91_SMC_NWECYCLE_(7) | AT91_SMC_NRDCYCLE_(7));
+ at91_sys_write(AT91_SMC_CYCLE(3), AT91_SMC_NWECYCLE_(5) | AT91_SMC_NRDCYCLE_(5));
if (data->bus_width_16)
mode = AT91_SMC_DBW_16;
else
mode = AT91_SMC_DBW_8;
- at91_sys_write(AT91_SMC_MODE(3), mode | AT91_SMC_READMODE | AT91_SMC_WRITEMODE | AT91_SMC_EXNWMODE_DISABLE | AT91_SMC_TDF_(1));
+ at91_sys_write(AT91_SMC_MODE(3), mode | AT91_SMC_READMODE | AT91_SMC_WRITEMODE | AT91_SMC_EXNWMODE_DISABLE | AT91_SMC_TDF_(2));
/* enable pin */
if (data->enable_pin)
}
EXPORT_SYMBOL(gpio_direction_input);
-int gpio_direction_output(unsigned pin)
+int gpio_direction_output(unsigned pin, int value)
{
void __iomem *pio = pin_to_controller(pin);
unsigned mask = pin_to_mask(pin);
if (!pio || !(__raw_readl(pio + PIO_PSR) & mask))
return -EINVAL;
+ __raw_writel(mask, pio + (value ? PIO_SODR : PIO_CODR));
__raw_writel(mask, pio + PIO_OER);
return 0;
}
#define CR_920T_ASYNC_MODE 0xC0000000
static u32 mpctl0_at_boot;
+static u32 bclk_div_at_boot;
static void imx_set_async_mode(void)
{
* imx_compute_mpctl - compute new PLL parameters
* @new_mpctl: pointer to location assigned by new PLL control register value
* @cur_mpctl: current PLL control register parameters
+ * @f_ref: reference source frequency Hz
* @freq: required frequency in Hz
* @relation: is one of %CPUFREQ_RELATION_L (supremum)
* and %CPUFREQ_RELATION_H (infimum)
*/
-long imx_compute_mpctl(u32 *new_mpctl, u32 cur_mpctl, unsigned long freq, int relation)
+long imx_compute_mpctl(u32 *new_mpctl, u32 cur_mpctl, u32 f_ref, unsigned long freq, int relation)
{
- u32 f_ref = (CSCR & CSCR_SYSTEM_SEL) ? 16000000 : (CLK32 * 512);
u32 mfi;
u32 mfn;
u32 mfd;
unsigned long flags;
long freq;
long sysclk;
- unsigned int bclk_div = 1;
+ unsigned int bclk_div = bclk_div_at_boot;
/*
* Some governors do not respects CPU and policy lower limits
sysclk = imx_get_system_clk();
- if (freq > sysclk + 1000000) {
- freq = imx_compute_mpctl(&mpctl0, mpctl0_at_boot, freq, relation);
+ if (freq > sysclk / bclk_div_at_boot + 1000000) {
+ freq = imx_compute_mpctl(&mpctl0, mpctl0_at_boot, CLK32 * 512, freq, relation);
if (freq < 0) {
printk(KERN_WARNING "imx: target frequency %ld Hz cannot be set\n", freq);
return -EINVAL;
if(bclk_div > 16)
bclk_div = 16;
+ if(bclk_div < bclk_div_at_boot)
+ bclk_div = bclk_div_at_boot;
}
freq = (sysclk + bclk_div / 2) / bclk_div;
}
static int __init imx_cpufreq_init(void)
{
-
+ bclk_div_at_boot = __mfld2val(CSCR_BCLK_DIV, CSCR) + 1;
mpctl0_at_boot = 0;
if((CSCR & CSCR_MPEN) &&
* f = 2 * f_ref * --------------------
* pd + 1
*/
-static unsigned int imx_decode_pll(unsigned int pll)
+static unsigned int imx_decode_pll(unsigned int pll, u32 f_ref)
{
unsigned long long ll;
unsigned long quot;
u32 mfn = pll & 0x3ff;
u32 mfd = (pll >> 16) & 0x3ff;
u32 pd = (pll >> 26) & 0xf;
- u32 f_ref = (CSCR & CSCR_SYSTEM_SEL) ? 16000000 : (CLK32 * 512);
mfi = mfi <= 5 ? 5 : mfi;
unsigned int imx_get_system_clk(void)
{
- return imx_decode_pll(SPCTL0);
+ u32 f_ref = (CSCR & CSCR_SYSTEM_SEL) ? 16000000 : (CLK32 * 512);
+
+ return imx_decode_pll(SPCTL0, f_ref);
}
EXPORT_SYMBOL(imx_get_system_clk);
unsigned int imx_get_mcu_clk(void)
{
- return imx_decode_pll(MPCTL0);
+ return imx_decode_pll(MPCTL0, CLK32 * 512);
}
EXPORT_SYMBOL(imx_get_mcu_clk);
comment "IOP32x Platform Types"
+config MACH_EP80219
+ bool
+
config MACH_GLANTANK
bool "Enable support for the IO-Data GLAN Tank"
help
config ARCH_IQ31244
bool "Enable support for EP80219/IQ31244"
+ select MACH_EP80219
help
Say Y here if you want to run your kernel on the Intel EP80219
evaluation kit for the Intel 80219 processor (a IOP321 variant)
#include <asm/arch/time.h>
/*
- * The EP80219 and IQ31244 use the same machine ID. To find out
- * which of the two we're running on, we look at the processor ID.
+ * Until March of 2007 iq31244 platforms and ep80219 platforms shared the
+ * same machine id, and the processor type was used to select board type.
+ * However this assumption breaks for an iq80219 board which is an iop219
+ * processor on an iq31244 board. The force_ep80219 flag has been added
+ * for old boot loaders using the iq31244 machine id for an ep80219 platform.
*/
+static int force_ep80219;
+
static int is_80219(void)
{
extern int processor_id;
return !!((processor_id & 0xffffffe0) == 0x69052e20);
}
+static int is_ep80219(void)
+{
+ if (machine_is_ep80219() || force_ep80219)
+ return 1;
+ else
+ return 0;
+}
+
/*
* EP80219/IQ31244 timer tick configuration.
*/
static void __init iq31244_timer_init(void)
{
- if (is_80219()) {
+ if (is_ep80219()) {
/* 33.333 MHz crystal. */
iop_init_time(200000000);
} else {
static int __init iq31244_pci_init(void)
{
- if (machine_is_iq31244()) {
+ if (is_ep80219())
+ pci_common_init(&ep80219_pci);
+ else if (machine_is_iq31244()) {
if (is_80219()) {
- pci_common_init(&ep80219_pci);
- } else {
- pci_common_init(&iq31244_pci);
+ printk("note: iq31244 board type has been selected\n");
+ printk("note: to select ep80219 operation:\n");
+ printk("\t1/ specify \"force_ep80219\" on the kernel"
+ " command line\n");
+ printk("\t2/ update boot loader to pass"
+ " the ep80219 id: %d\n", MACH_TYPE_EP80219);
}
+ pci_common_init(&iq31244_pci);
}
return 0;
platform_device_register(&iq31244_flash_device);
platform_device_register(&iq31244_serial_device);
- if (is_80219())
+ if (is_ep80219())
pm_power_off = ep80219_power_off;
}
+static int __init force_ep80219_setup(char *str)
+{
+ force_ep80219 = 1;
+ return 1;
+}
+
+__setup("force_ep80219", force_ep80219_setup);
+
MACHINE_START(IQ31244, "Intel IQ31244")
/* Maintainer: Intel Corp. */
.phys_io = IQ31244_UART,
.timer = &iq31244_timer,
.init_machine = iq31244_init_machine,
MACHINE_END
+
+/* There should have been an ep80219 machine identifier from the beginning.
+ * Boot roms older than March 2007 do not know the ep80219 machine id. Pass
+ * "force_ep80219" on the kernel command line, otherwise iq31244 operation
+ * will be selected.
+ */
+MACHINE_START(EP80219, "Intel EP80219")
+ /* Maintainer: Intel Corp. */
+ .phys_io = IQ31244_UART,
+ .io_pg_offst = ((IQ31244_UART) >> 18) & 0xfffc,
+ .boot_params = 0xa0000100,
+ .map_io = iq31244_map_io,
+ .init_irq = iop32x_init_irq,
+ .timer = &iq31244_timer,
+ .init_machine = iq31244_init_machine,
+MACHINE_END
board_a9m9750dev_init_machine();
}
-MACHINE_START(CC9P9360DEV, "Connect Core 9P 9360 on an A9M9750 Devboard")
+MACHINE_START(CC9P9360DEV, "Digi ConnectCore 9P 9360 on an A9M9750 Devboard")
.map_io = mach_cc9p9360dev_map_io,
.init_irq = mach_cc9p9360dev_init_irq,
.init_machine = mach_cc9p9360dev_init_machine,
#include <linux/mtd/nand.h>
#include <linux/mtd/partitions.h>
#include <linux/input.h>
+#include <linux/workqueue.h>
#include <asm/hardware.h>
#include <asm/mach-types.h>
#include <asm/arch/clock.h>
#include <asm/arch/sram.h>
+#include <asm/div64.h>
#include "prcm-regs.h"
#include "memory.h"
.name = "dss2_fck",
.parent = &sys_ck, /* fixed at sys_ck or 48MHz */
.flags = CLOCK_IN_OMAP242X | CLOCK_IN_OMAP243X |
- RATE_CKCTL | CM_CORE_SEL1 | RATE_FIXED,
+ RATE_CKCTL | CM_CORE_SEL1 | RATE_FIXED |
+ DELAYED_APP,
.enable_reg = (void __iomem *)&CM_FCLKEN1_CORE,
.enable_bit = 1,
.src_offset = 13,
#include <asm/hardware.h>
#include <asm/irq.h>
#include <asm/system.h>
+#include <asm/arch/pxa-regs.h>
#include <asm/arch/irda.h>
#include <asm/arch/mmc.h>
#include <asm/arch/udc.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/mach/irq.h>
-
-#include <asm/arch/pxa-regs.h>
#include <asm/arch/tosa.h>
#include <asm/hardware/scoop.h>
/* setup PM */
+#ifdef CONFIG_PM_H1940
memcpy(phys_to_virt(H1940_SUSPEND_RESUMEAT), h1940_pm_return, 1024);
+#endif
s3c2410_pm_init();
}
static void __init rx3715_init_machine(void)
{
+#ifdef CONFIG_PM_H1940
memcpy(phys_to_virt(H1940_SUSPEND_RESUMEAT), h1940_pm_return, 1024);
+#endif
s3c2410_pm_init();
s3c24xx_fb_set_platdata(&rx3715_lcdcfg);
static void s3c2443_irq_demux_dma(unsigned int irq, struct irq_desc *desc)
{
- s3c2443_irq_demux(IRQ_S3C2443_DMA1, 6);
+ s3c2443_irq_demux(IRQ_S3C2443_DMA0, 6);
}
#define INTMSK_DMA (1UL << (IRQ_S3C2443_DMA - IRQ_EINT0))
#include <asm/mach/map.h>
#include <asm/mach/flash.h>
#include <asm/irq.h>
+#include <asm/gpio.h>
#include "generic.h"
EXPORT_SYMBOL(gpio_direction_input);
-int gpio_direction_output(unsigned gpio)
+int gpio_direction_output(unsigned gpio, int value)
{
unsigned long flags;
return -EINVAL;
local_irq_save(flags);
+ gpio_set_value(gpio, value);
GPDR |= GPIO_GPIO(gpio);
local_irq_restore(flags);
return 0;
#
# http://www.arm.linux.org.uk/developer/machines/?action=new
#
-# Last update: Tue Jan 16 16:52:56 2007
+# Last update: Mon Apr 16 21:01:04 2007
#
# machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number
#
bug MACH_BUG BUG 1179
mx33ads MACH_MX33ADS MX33ADS 1180
chub MACH_CHUB CHUB 1181
-gta01 MACH_GTA01 GTA01 1182
+neo1973_gta01 MACH_NEO1973_GTA01 NEO1973_GTA01 1182
w90n740 MACH_W90N740 W90N740 1183
medallion_sa2410 MACH_MEDALLION_SA2410 MEDALLION_SA2410 1184
ia_cpu_9200_2 MACH_IA_CPU_9200_2 IA_CPU_9200_2 1185
dimmrm9200 MACH_DIMMRM9200 DIMMRM9200 1186
pm9261 MACH_PM9261 PM9261 1187
-mx21 MACH_MX21 MX21 1188
ml7304 MACH_ML7304 ML7304 1189
ucp250 MACH_UCP250 UCP250 1190
intboard MACH_INTBOARD INTBOARD 1191
tecon_tmezon MACH_TECON_TMEZON TECON_TMEZON 1231
zylonite MACH_ZYLONITE ZYLONITE 1233
gene1270 MACH_GENE1270 GENE1270 1234
+zir2412 MACH_ZIR2412 ZIR2412 1235
+mx31lite MACH_MX31LITE MX31LITE 1236
+t700wx MACH_T700WX T700WX 1237
+vf100 MACH_VF100 VF100 1238
+nsb2 MACH_NSB2 NSB2 1239
+nxhmi_bb MACH_NXHMI_BB NXHMI_BB 1240
+nxhmi_re MACH_NXHMI_RE NXHMI_RE 1241
+n4100pro MACH_N4100PRO N4100PRO 1242
+sam9260 MACH_SAM9260 SAM9260 1243
+omap_treo600 MACH_OMAP_TREO600 OMAP_TREO600 1244
+indy2410 MACH_INDY2410 INDY2410 1245
+nelt_a MACH_NELT_A NELT_A 1246
+n311 MACH_N311 N311 1248
+at91sam9260vgk MACH_AT91SAM9260VGK AT91SAM9260VGK 1249
+at91leppe MACH_AT91LEPPE AT91LEPPE 1250
+at91lepccn MACH_AT91LEPCCN AT91LEPCCN 1251
+apc7100 MACH_APC7100 APC7100 1252
+stargazer MACH_STARGAZER STARGAZER 1253
+sonata MACH_SONATA SONATA 1254
+schmoogie MACH_SCHMOOGIE SCHMOOGIE 1255
+aztool MACH_AZTOOL AZTOOL 1256
+mioa701 MACH_MIOA701 MIOA701 1257
+sxni9260 MACH_SXNI9260 SXNI9260 1258
+mxc27520evb MACH_MXC27520EVB MXC27520EVB 1259
+armadillo5x0 MACH_ARMADILLO5X0 ARMADILLO5X0 1260
+mb9260 MACH_MB9260 MB9260 1261
+mb9263 MACH_MB9263 MB9263 1262
+ipac9302 MACH_IPAC9302 IPAC9302 1263
+cc9p9360js MACH_CC9P9360JS CC9P9360JS 1264
+gallium MACH_GALLIUM GALLIUM 1265
+msc2410 MACH_MSC2410 MSC2410 1266
+ghi270 MACH_GHI270 GHI270 1267
+davinci_leonardo MACH_DAVINCI_LEONARDO DAVINCI_LEONARDO 1268
+oiab MACH_OIAB OIAB 1269
+smdk6400 MACH_SMDK6400 SMDK6400 1270
+nokia_n800 MACH_NOKIA_N800 NOKIA_N800 1271
+greenphone MACH_GREENPHONE GREENPHONE 1272
+compex42x MACH_COMPEXWP18 COMPEXWP18 1273
+xmate MACH_XMATE XMATE 1274
+energizer MACH_ENERGIZER ENERGIZER 1275
+ime1 MACH_IME1 IME1 1276
+sweda_tms MACH_SWEDATMS SWEDATMS 1277
+ntnp435c MACH_NTNP435C NTNP435C 1278
+spectro2 MACH_SPECTRO2 SPECTRO2 1279
+h6039 MACH_H6039 H6039 1280
+ep80219 MACH_EP80219 EP80219 1281
+samoa_ii MACH_SAMOA_II SAMOA_II 1282
+cwmxl MACH_CWMXL CWMXL 1283
+as9200 MACH_AS9200 AS9200 1284
+sfx1149 MACH_SFX1149 SFX1149 1285
+navi010 MACH_NAVI010 NAVI010 1286
+multmdp MACH_MULTMDP MULTMDP 1287
+scb9520 MACH_SCB9520 SCB9520 1288
+htcathena MACH_HTCATHENA HTCATHENA 1289
+xp179 MACH_XP179 XP179 1290
+h4300 MACH_H4300 H4300 1291
+goramo_mlr MACH_GORAMO_MLR GORAMO_MLR 1292
+mxc30020evb MACH_MXC30020EVB MXC30020EVB 1293
+adsbitsymx MACH_ADSBITSIMX ADSBITSIMX 1294
+adsportalplus MACH_ADSPORTALPLUS ADSPORTALPLUS 1295
+mmsp2plus MACH_MMSP2PLUS MMSP2PLUS 1296
+em_x270 MACH_EM_X270 EM_X270 1297
+tpp302 MACH_TPP302 TPP302 1298
+tpp104 MACH_TPM104 TPM104 1299
+tpm102 MACH_TPM102 TPM102 1300
+tpm109 MACH_TPM109 TPM109 1301
+fbxo1 MACH_FBXO1 FBXO1 1302
+hxd8 MACH_HXD8 HXD8 1303
+neo1973_gta02 MACH_NEO1973_GTA02 NEO1973_GTA02 1304
+emtest MACH_EMTEST EMTEST 1305
+ad6900 MACH_AD6900 AD6900 1306
+europa MACH_EUROPA EUROPA 1307
+metroconnect MACH_METROCONNECT METROCONNECT 1308
+ez_s2410 MACH_EZ_S2410 EZ_S2410 1309
+ez_s2440 MACH_EZ_S2440 EZ_S2440 1310
+ez_ep9312 MACH_EZ_EP9312 EZ_EP9312 1311
+ez_ep9315 MACH_EZ_EP9315 EZ_EP9315 1312
+ez_x7 MACH_EZ_X7 EZ_X7 1313
+godotdb MACH_GODOTDB GODOTDB 1314
+mistral MACH_MISTRAL MISTRAL 1315
+msm MACH_MSM MSM 1316
+ct5910 MACH_CT5910 CT5910 1317
+ct5912 MACH_CT5912 CT5912 1318
+hynet_ine MACH_HYNET_INE HYNET_INE 1319
+hynet_app MACH_HYNET_APP HYNET_APP 1320
+msm7200 MACH_MSM7200 MSM7200 1321
+msm7600 MACH_MSM7600 MSM7600 1322
+ceb255 MACH_CEB255 CEB255 1323
+ciel MACH_CIEL CIEL 1324
+slm5650 MACH_SLM5650 SLM5650 1325
+at91sam9rlek MACH_AT91SAM9RLEK AT91SAM9RLEK 1326
+comtech_router MACH_COMTECH_ROUTER COMTECH_ROUTER 1327
+sbc2410x MACH_SBC2410X SBC2410X 1328
+at4x0bd MACH_AT4X0BD AT4X0BD 1329
}
EXPORT_SYMBOL(gpio_direction_input);
-int gpio_direction_output(unsigned int gpio)
+int gpio_direction_output(unsigned int gpio, int value)
{
struct pio_device *pio;
unsigned int pin;
if (!pio)
return -ENODEV;
+ gpio_set_value(gpio, value);
+
pin = gpio & 0x1f;
pio_writel(pio, OER, 1 << pin);
if ((err = pcibios_enable_resources(dev, mask)) < 0)
return err;
- return pcibios_enable_irq(dev);
+ if (!dev->msi_enabled)
+ pcibios_enable_irq(dev);
+ return 0;
}
int pcibios_assign_resources(void)
if ((err = pcibios_enable_resources(dev, mask)) < 0)
return err;
- pcibios_enable_irq(dev);
+ if (!dev->msi_enabled)
+ pcibios_enable_irq(dev);
return 0;
}
config VMI
bool "VMI Paravirt-ops support"
- depends on PARAVIRT
+ depends on PARAVIRT && !COMPAT_VDSO
help
VMI provides a paravirtualized interface to the VMware ESX server
(it could be used by other hypervisors in theory too, but is not
jmp _m_s
check_vesa:
+#ifdef CONFIG_FIRMWARE_EDID
+ leaw modelist+1024, %di
+ movw $0x4f00, %ax
+ int $0x10
+ cmpw $0x004f, %ax
+ jnz setbad
+
+ movw 4(%di), %ax
+ movw %ax, vbe_version
+#endif
leaw modelist+1024, %di
subb $VIDEO_FIRST_VESA>>8, %bh
movw %bx, %cx # Get mode information structure
rep
stosl
+ cmpw $0x0200, vbe_version # only do EDID on >= VBE2.0
+ jl no_edid
+
pushw %es # save ES
xorw %di, %di # Report Capability
pushw %di
svga_prefix: .byte VIDEO_FIRST_BIOS>>8 # Default prefix for BIOS modes
graphic_mode: .byte 0 # Graphic mode with a linear frame buffer
dac_size: .byte 6 # DAC bit depth
+vbe_version: .word 0 # VBE bios version
# Status messages
keymsg: .ascii "Press <RETURN> to see video modes available, "
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.20-git8
-# Tue Feb 13 11:25:18 2007
+# Linux kernel version: 2.6.21-rc3
+# Wed Mar 7 15:29:47 2007
#
CONFIG_X86_32=y
CONFIG_GENERIC_TIME=y
+CONFIG_CLOCKSOURCE_WATCHDOG=y
+CONFIG_GENERIC_CLOCKEVENTS=y
+CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_SEMAPHORE_SLEEPERS=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
# CONFIG_IPC_NS is not set
+CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_TASKSTATS is not set
# CONFIG_CPUSETS is not set
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_RELAY is not set
+CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
#
# Processor type and features
#
+# CONFIG_TICK_ONESHOT is not set
+# CONFIG_NO_HZ is not set
+# CONFIG_HIGH_RES_TIMERS is not set
CONFIG_SMP=y
# CONFIG_X86_PC is not set
# CONFIG_X86_ELAN is not set
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
-# CONFIG_ACPI_HOTKEY is not set
CONFIG_ACPI_FAN=y
# CONFIG_ACPI_DOCK is not set
-# CONFIG_ACPI_BAY is not set
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_THERMAL=y
# CONFIG_ACPI_ASUS is not set
# CONFIG_X86_CPUFREQ_NFORCE2 is not set
# CONFIG_X86_LONGRUN is not set
# CONFIG_X86_LONGHAUL is not set
+# CONFIG_X86_E_POWERSAVER is not set
#
# shared options
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
-# CONFIG_INET_TUNNEL is not set
+CONFIG_INET_TUNNEL=y
CONFIG_INET_XFRM_MODE_TRANSPORT=y
CONFIG_INET_XFRM_MODE_TUNNEL=y
# CONFIG_INET_XFRM_MODE_BEET is not set
#
# Plug and Play support
#
-# CONFIG_PNP is not set
+CONFIG_PNP=y
+# CONFIG_PNP_DEBUG is not set
+
+#
+# Protocols
+#
+CONFIG_PNPACPI=y
#
# Block devices
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024
-CONFIG_BLK_DEV_INITRD=y
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
# CONFIG_IBM_ASM is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
+# CONFIG_SONY_LAPTOP is not set
#
# ATA/ATAPI/MFM/RLL support
#
CONFIG_IDE_GENERIC=y
# CONFIG_BLK_DEV_CMD640 is not set
+# CONFIG_BLK_DEV_IDEPNP is not set
CONFIG_BLK_DEV_IDEPCI=y
# CONFIG_IDEPCI_SHARE_IRQ is not set
# CONFIG_BLK_DEV_OFFBOARD is not set
# CONFIG_SATA_VITESSE is not set
# CONFIG_SATA_INIC162X is not set
CONFIG_SATA_INTEL_COMBINED=y
+CONFIG_SATA_ACPI=y
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
# CONFIG_TUN is not set
+# CONFIG_NET_SB1000 is not set
#
# ARCnet devices
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_PCI=y
+CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set
# CONFIG_HWMON is not set
# CONFIG_HWMON_VID is not set
+#
+# Multifunction device drivers
+#
+# CONFIG_MFD_SM501 is not set
+
#
# Multimedia devices
#
#
# Graphics support
#
-CONFIG_FIRMWARE_EDID=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
# CONFIG_FB is not set
#
CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=128
CONFIG_VIDEO_SELECT=y
CONFIG_DUMMY_CONSOLE=y
-# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
+# CONFIG_USB_BERRY_CHARGE is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
+# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
#
CONFIG_LOG_BUF_SHIFT=18
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
+# CONFIG_TIMER_STATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_FORCED_INLINING is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_LKDTM is not set
+# CONFIG_FAULT_INJECTION is not set
CONFIG_EARLY_PRINTK=y
CONFIG_DEBUG_STACKOVERFLOW=y
# CONFIG_DEBUG_STACK_USAGE is not set
#include <asm/alternative.h>
#include <asm/sections.h>
-static int no_replacement = 0;
static int smp_alt_once = 0;
static int debug_alternative = 0;
-static int __init noreplacement_setup(char *s)
-{
- no_replacement = 1;
- return 1;
-}
static int __init bootonly(char *str)
{
smp_alt_once = 1;
return 1;
}
-__setup("noreplacement", noreplacement_setup);
__setup("smp-alt-boot", bootonly);
__setup("debug-alternative", debug_alt);
struct smp_alt_module *smp;
unsigned long flags;
- if (no_replacement)
- return;
-
if (smp_alt_once) {
if (boot_cpu_has(X86_FEATURE_UP))
alternatives_smp_unlock(locks, locks_end,
struct smp_alt_module *item;
unsigned long flags;
- if (no_replacement || smp_alt_once)
+ if (smp_alt_once)
return;
spin_lock_irqsave(&smp_alt, flags);
return;
#endif
- if (no_replacement || smp_alt_once)
+ if (smp_alt_once)
return;
BUG_ON(!smp && (num_online_cpus() > 1));
void __init alternative_instructions(void)
{
unsigned long flags;
- if (no_replacement) {
- printk(KERN_INFO "(SMP-)alternatives turned off\n");
- free_init_pages("SMP alternatives",
- (unsigned long)__smp_alt_begin,
- (unsigned long)__smp_alt_end);
- return;
- }
local_irq_save(flags);
apply_alternatives(__alt_instructions, __alt_instructions_end);
#include <linux/clockchips.h>
#include <linux/acpi_pmtmr.h>
#include <linux/module.h>
+#include <linux/dmi.h>
#include <asm/atomic.h>
#include <asm/smp.h>
/* Local APIC timer verification ok */
static int local_apic_timer_verify_ok;
+/* Disable local APIC timer from the kernel commandline or via dmi quirk */
+static int local_apic_timer_disabled;
+/* Local APIC timer works in C2 */
+int local_apic_timer_c2_ok;
+EXPORT_SYMBOL_GPL(local_apic_timer_c2_ok);
/*
* Debug level, exported for io_apic.c
void (*real_handler)(struct clock_event_device *dev);
unsigned long deltaj;
long delta, deltapm;
+ int pm_referenced = 0;
+
+ if (boot_cpu_has(X86_FEATURE_LAPIC_TIMER_BROKEN))
+ local_apic_timer_disabled = 1;
+
+ /*
+ * The local apic timer can be disabled via the kernel
+ * commandline or from the test above. Register the lapic
+ * timer as a dummy clock event source on SMP systems, so the
+ * broadcast mechanism is used. On UP systems simply ignore it.
+ */
+ if (local_apic_timer_disabled) {
+ /* No broadcast on UP ! */
+ if (num_possible_cpus() > 1)
+ setup_APIC_timer();
+ return;
+ }
apic_printk(APIC_VERBOSE, "Using local APIC timer interrupts.\n"
"calibrating APIC timer ...\n");
/* Let the interrupts run */
local_irq_enable();
- while(lapic_cal_loops <= LAPIC_CAL_LOOPS);
+ while (lapic_cal_loops <= LAPIC_CAL_LOOPS)
+ cpu_relax();
local_irq_disable();
"%lu (%ld)\n", (unsigned long) res, delta);
delta = (long) res;
}
+ pm_referenced = 1;
}
/* Calculate the scaled math multiplication factor */
calibration_result / (1000000 / HZ),
calibration_result % (1000000 / HZ));
-
- apic_printk(APIC_VERBOSE, "... verify APIC timer\n");
-
- /*
- * Setup the apic timer manually
- */
local_apic_timer_verify_ok = 1;
- levt->event_handler = lapic_cal_handler;
- lapic_timer_setup(CLOCK_EVT_MODE_PERIODIC, levt);
- lapic_cal_loops = -1;
- /* Let the interrupts run */
- local_irq_enable();
+ /* We trust the pm timer based calibration */
+ if (!pm_referenced) {
+ apic_printk(APIC_VERBOSE, "... verify APIC timer\n");
- while(lapic_cal_loops <= LAPIC_CAL_LOOPS);
+ /*
+ * Setup the apic timer manually
+ */
+ levt->event_handler = lapic_cal_handler;
+ lapic_timer_setup(CLOCK_EVT_MODE_PERIODIC, levt);
+ lapic_cal_loops = -1;
- local_irq_disable();
+ /* Let the interrupts run */
+ local_irq_enable();
- /* Stop the lapic timer */
- lapic_timer_setup(CLOCK_EVT_MODE_SHUTDOWN, levt);
+ while(lapic_cal_loops <= LAPIC_CAL_LOOPS)
+ cpu_relax();
- local_irq_enable();
+ local_irq_disable();
- /* Jiffies delta */
- deltaj = lapic_cal_j2 - lapic_cal_j1;
- apic_printk(APIC_VERBOSE, "... jiffies delta = %lu\n", deltaj);
+ /* Stop the lapic timer */
+ lapic_timer_setup(CLOCK_EVT_MODE_SHUTDOWN, levt);
- /* Check, if the PM timer is available */
- deltapm = lapic_cal_pm2 - lapic_cal_pm1;
- apic_printk(APIC_VERBOSE, "... PM timer delta = %ld\n", deltapm);
+ local_irq_enable();
- local_apic_timer_verify_ok = 0;
+ /* Jiffies delta */
+ deltaj = lapic_cal_j2 - lapic_cal_j1;
+ apic_printk(APIC_VERBOSE, "... jiffies delta = %lu\n", deltaj);
- if (deltapm) {
- if (deltapm > (pm_100ms - pm_thresh) &&
- deltapm < (pm_100ms + pm_thresh)) {
- apic_printk(APIC_VERBOSE, "... PM timer result ok\n");
- /* Check, if the jiffies result is consistent */
- if (deltaj < LAPIC_CAL_LOOPS-2 ||
- deltaj > LAPIC_CAL_LOOPS+2) {
- /*
- * Not sure, what we can do about this one.
- * When high resultion timers are active
- * and the lapic timer does not stop in C3
- * we are fine. Otherwise more trouble might
- * be waiting. -- tglx
- */
- printk(KERN_WARNING "Global event device %s "
- "has wrong frequency "
- "(%lu ticks instead of %d)\n",
- global_clock_event->name, deltaj,
- LAPIC_CAL_LOOPS);
- }
- local_apic_timer_verify_ok = 1;
- }
- } else {
/* Check, if the jiffies result is consistent */
- if (deltaj >= LAPIC_CAL_LOOPS-2 &&
- deltaj <= LAPIC_CAL_LOOPS+2) {
+ if (deltaj >= LAPIC_CAL_LOOPS-2 && deltaj <= LAPIC_CAL_LOOPS+2)
apic_printk(APIC_VERBOSE, "... jiffies result ok\n");
- local_apic_timer_verify_ok = 1;
- }
- }
+ else
+ local_apic_timer_verify_ok = 0;
+ } else
+ local_irq_enable();
if (!local_apic_timer_verify_ok) {
printk(KERN_WARNING
}
early_param("nolapic", parse_nolapic);
+static int __init parse_disable_lapic_timer(char *arg)
+{
+ local_apic_timer_disabled = 1;
+ return 0;
+}
+early_param("nolapic_timer", parse_disable_lapic_timer);
+
+static int __init parse_lapic_timer_c2_ok(char *arg)
+{
+ local_apic_timer_c2_ok = 1;
+ return 0;
+}
+early_param("lapic_timer_c2_ok", parse_lapic_timer_c2_ok);
+
static int __init apic_set_verbosity(char *str)
{
if (strcmp("debug", str) == 0)
extern void vide(void);
__asm__(".align 4\nvide: ret");
+#define ENABLE_C1E_MASK 0x18000000
+#define CPUID_PROCESSOR_SIGNATURE 1
+#define CPUID_XFAM 0x0ff00000
+#define CPUID_XFAM_K8 0x00000000
+#define CPUID_XFAM_10H 0x00100000
+#define CPUID_XFAM_11H 0x00200000
+#define CPUID_XMOD 0x000f0000
+#define CPUID_XMOD_REV_F 0x00040000
+
+/* AMD systems with C1E don't have a working lAPIC timer. Check for that. */
+static __cpuinit int amd_apic_timer_broken(void)
+{
+ u32 lo, hi;
+ u32 eax = cpuid_eax(CPUID_PROCESSOR_SIGNATURE);
+ switch (eax & CPUID_XFAM) {
+ case CPUID_XFAM_K8:
+ if ((eax & CPUID_XMOD) < CPUID_XMOD_REV_F)
+ break;
+ case CPUID_XFAM_10H:
+ case CPUID_XFAM_11H:
+ rdmsr(MSR_K8_ENABLE_C1E, lo, hi);
+ if (lo & ENABLE_C1E_MASK)
+ return 1;
+ break;
+ default:
+ /* err on the side of caution */
+ return 1;
+ }
+ return 0;
+}
+
static void __cpuinit init_amd(struct cpuinfo_x86 *c)
{
u32 l, h;
if (cpuid_eax(0x80000000) >= 0x80000006)
num_cache_leaves = 3;
+
+ if (amd_apic_timer_broken())
+ set_bit(X86_FEATURE_LAPIC_TIMER_BROKEN, c->x86_capability);
}
static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 * c, unsigned int size)
NULL, (void *)&pr);
/* Check ACPI support for C3 state */
- if (pr != NULL && longhaul_version != TYPE_LONGHAUL_V1) {
+ if (pr != NULL && longhaul_version == TYPE_POWERSAVER) {
cx = &pr->power.states[ACPI_STATE_C3];
if (cx->address > 0 && cx->latency <= 1000) {
longhaul_flags |= USE_ACPI_C3;
#include <linux/errno.h>
#include <linux/hpet.h>
#include <linux/init.h>
+#include <linux/sysdev.h>
+#include <linux/pm.h>
#include <asm/hpet.h>
#include <asm/io.h>
cnt += delta;
hpet_writel(cnt, HPET_T0_CMP);
- return ((long)(hpet_readl(HPET_COUNTER) - cnt ) > 0);
+ return ((long)(hpet_readl(HPET_COUNTER) - cnt ) > 0) ? -ETIME : 0;
}
/*
out_nohpet:
iounmap(hpet_virt_address);
hpet_virt_address = NULL;
+ boot_hpet_disable = 1;
return 0;
}
return IRQ_HANDLED;
}
#endif
+
+
+/*
+ * Suspend/resume part
+ */
+
+#ifdef CONFIG_PM
+
+static int hpet_suspend(struct sys_device *sys_device, pm_message_t state)
+{
+ unsigned long cfg = hpet_readl(HPET_CFG);
+
+ cfg &= ~(HPET_CFG_ENABLE|HPET_CFG_LEGACY);
+ hpet_writel(cfg, HPET_CFG);
+
+ return 0;
+}
+
+static int hpet_resume(struct sys_device *sys_device)
+{
+ unsigned int id;
+
+ hpet_start_counter();
+
+ id = hpet_readl(HPET_ID);
+
+ if (id & HPET_ID_LEGSUP)
+ hpet_enable_int();
+
+ return 0;
+}
+
+static struct sysdev_class hpet_class = {
+ set_kset_name("hpet"),
+ .suspend = hpet_suspend,
+ .resume = hpet_resume,
+};
+
+static struct sys_device hpet_device = {
+ .id = 0,
+ .cls = &hpet_class,
+};
+
+
+static __init int hpet_register_sysfs(void)
+{
+ int err;
+
+ if (!is_hpet_capable())
+ return 0;
+
+ err = sysdev_class_register(&hpet_class);
+
+ if (!err) {
+ err = sysdev_register(&hpet_device);
+ if (err)
+ sysdev_class_unregister(&hpet_class);
+ }
+
+ return err;
+}
+
+device_initcall(hpet_register_sysfs);
+
+#endif
#endif
EXPORT_SYMBOL(csum_partial);
+
+EXPORT_SYMBOL(_proxy_pda);
outb(LATCH >> 8 , PIT_CH0); /* MSB */
break;
- case CLOCK_EVT_MODE_ONESHOT:
+ /*
+ * Avoid unnecessary state transitions, as it confuses
+ * Geode / Cyrix based boxen.
+ */
case CLOCK_EVT_MODE_SHUTDOWN:
+ if (evt->mode == CLOCK_EVT_MODE_UNUSED)
+ break;
case CLOCK_EVT_MODE_UNUSED:
+ if (evt->mode == CLOCK_EVT_MODE_SHUTDOWN)
+ break;
+ case CLOCK_EVT_MODE_ONESHOT:
/* One shot setup */
outb_p(0x38, PIT_MODE);
udelay(10);
return 0;
}
-int __init irqbalance_disable(char *str)
+int __devinit irqbalance_disable(char *str)
{
irqbalance_disabled = 1;
return 1;
return error;
}
+static int apply_microcode_on_cpu(int cpu)
+{
+ struct cpuinfo_x86 *c = cpu_data + cpu;
+ struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+ cpumask_t old;
+ unsigned int val[2];
+ int err = 0;
+
+ if (!uci->mc)
+ return -EINVAL;
+
+ old = current->cpus_allowed;
+ set_cpus_allowed(current, cpumask_of_cpu(cpu));
+
+ /* Check if the microcode we have in memory matches the CPU */
+ if (c->x86_vendor != X86_VENDOR_INTEL || c->x86 < 6 ||
+ cpu_has(c, X86_FEATURE_IA64) || uci->sig != cpuid_eax(0x00000001))
+ err = -EINVAL;
+
+ if (!err && ((c->x86_model >= 5) || (c->x86 > 6))) {
+ /* get processor flags from MSR 0x17 */
+ rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
+ if (uci->pf != (1 << ((val[1] >> 18) & 7)))
+ err = -EINVAL;
+ }
+
+ if (!err) {
+ wrmsr(MSR_IA32_UCODE_REV, 0, 0);
+ /* see notes above for revision 1.07. Apparent chip bug */
+ sync_core();
+ /* get the current revision from MSR 0x8B */
+ rdmsr(MSR_IA32_UCODE_REV, val[0], val[1]);
+ if (uci->rev != val[1])
+ err = -EINVAL;
+ }
+
+ if (!err)
+ apply_microcode(cpu);
+ else
+ printk(KERN_ERR "microcode: Could not apply microcode to CPU%d:"
+ " sig=0x%x, pf=0x%x, rev=0x%x\n",
+ cpu, uci->sig, uci->pf, uci->rev);
+
+ set_cpus_allowed(current, old);
+ return err;
+}
+
static void microcode_init_cpu(int cpu)
{
cpumask_t old;
set_cpus_allowed(current, cpumask_of_cpu(cpu));
mutex_lock(&microcode_mutex);
collect_cpu_info(cpu);
- if (uci->valid && system_state == SYSTEM_RUNNING)
+ if (uci->valid && system_state == SYSTEM_RUNNING &&
+ !suspend_cpu_hotplug)
cpu_request_microcode(cpu);
mutex_unlock(&microcode_mutex);
set_cpus_allowed(current, old);
return 0;
pr_debug("Microcode:CPU %d added\n", cpu);
- memset(uci, 0, sizeof(*uci));
+ /* If suspend_cpu_hotplug is set, the system is resuming and we should
+ * use the data from before the suspend.
+ */
+ if (suspend_cpu_hotplug) {
+ err = apply_microcode_on_cpu(cpu);
+ if (err)
+ microcode_fini_cpu(cpu);
+ }
+ if (!uci->valid)
+ memset(uci, 0, sizeof(*uci));
err = sysfs_create_group(&sys_dev->kobj, &mc_attr_group);
if (err)
return err;
- microcode_init_cpu(cpu);
+ if (!uci->valid)
+ microcode_init_cpu(cpu);
+
return 0;
}
if (!cpu_online(cpu))
return 0;
pr_debug("Microcode:CPU %d removed\n", cpu);
- microcode_fini_cpu(cpu);
+ /* If suspend_cpu_hotplug is set, the system is suspending and we should
+ * keep the microcode in memory for the resume.
+ */
+ if (!suspend_cpu_hotplug)
+ microcode_fini_cpu(cpu);
sysfs_remove_group(&sys_dev->kobj, &mc_attr_group);
return 0;
}
* different subsystems this reservation system just tries to coordinate
* things a little
*/
-static DEFINE_PER_CPU(unsigned long, perfctr_nmi_owner);
-static DEFINE_PER_CPU(unsigned long, evntsel_nmi_owner[3]);
-
-static cpumask_t backtrace_mask = CPU_MASK_NONE;
/* this number is calculated from Intel's MSR_P4_CRU_ESCR5 register and it's
* offset from MSR_P4_BSU_ESCR0. It will be the max for all platforms (for now)
*/
#define NMI_MAX_COUNTER_BITS 66
+#define NMI_MAX_COUNTER_LONGS BITS_TO_LONGS(NMI_MAX_COUNTER_BITS)
+
+static DEFINE_PER_CPU(unsigned long, perfctr_nmi_owner[NMI_MAX_COUNTER_LONGS]);
+static DEFINE_PER_CPU(unsigned long, evntsel_nmi_owner[NMI_MAX_COUNTER_LONGS]);
+static cpumask_t backtrace_mask = CPU_MASK_NONE;
/* nmi_active:
* >0: the lapic NMI watchdog is active, but can be disabled
* <0: the lapic NMI watchdog has not been set up, and cannot
/* checks for a bit availability (hack for oprofile) */
int avail_to_resrv_perfctr_nmi_bit(unsigned int counter)
{
+ int cpu;
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
-
- return (!test_bit(counter, &__get_cpu_var(perfctr_nmi_owner)));
+ for_each_possible_cpu (cpu) {
+ if (test_bit(counter, &per_cpu(perfctr_nmi_owner, cpu)[0]))
+ return 0;
+ }
+ return 1;
}
/* checks the an msr for availability */
int avail_to_resrv_perfctr_nmi(unsigned int msr)
{
unsigned int counter;
+ int cpu;
counter = nmi_perfctr_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- return (!test_bit(counter, &__get_cpu_var(perfctr_nmi_owner)));
+ for_each_possible_cpu (cpu) {
+ if (test_bit(counter, &per_cpu(perfctr_nmi_owner, cpu)[0]))
+ return 0;
+ }
+ return 1;
}
-int reserve_perfctr_nmi(unsigned int msr)
+static int __reserve_perfctr_nmi(int cpu, unsigned int msr)
{
unsigned int counter;
+ if (cpu < 0)
+ cpu = smp_processor_id();
counter = nmi_perfctr_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- if (!test_and_set_bit(counter, &__get_cpu_var(perfctr_nmi_owner)))
+ if (!test_and_set_bit(counter, &per_cpu(perfctr_nmi_owner, cpu)[0]))
return 1;
return 0;
}
-void release_perfctr_nmi(unsigned int msr)
+static void __release_perfctr_nmi(int cpu, unsigned int msr)
{
unsigned int counter;
+ if (cpu < 0)
+ cpu = smp_processor_id();
counter = nmi_perfctr_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- clear_bit(counter, &__get_cpu_var(perfctr_nmi_owner));
+ clear_bit(counter, &per_cpu(perfctr_nmi_owner, cpu)[0]);
}
-int reserve_evntsel_nmi(unsigned int msr)
+int reserve_perfctr_nmi(unsigned int msr)
+{
+ int cpu, i;
+ for_each_possible_cpu (cpu) {
+ if (!__reserve_perfctr_nmi(cpu, msr)) {
+ for_each_possible_cpu (i) {
+ if (i >= cpu)
+ break;
+ __release_perfctr_nmi(i, msr);
+ }
+ return 0;
+ }
+ }
+ return 1;
+}
+
+void release_perfctr_nmi(unsigned int msr)
+{
+ int cpu;
+ for_each_possible_cpu (cpu) {
+ __release_perfctr_nmi(cpu, msr);
+ }
+}
+
+int __reserve_evntsel_nmi(int cpu, unsigned int msr)
{
unsigned int counter;
+ if (cpu < 0)
+ cpu = smp_processor_id();
counter = nmi_evntsel_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- if (!test_and_set_bit(counter, &__get_cpu_var(evntsel_nmi_owner)[0]))
+ if (!test_and_set_bit(counter, &per_cpu(evntsel_nmi_owner, cpu)[0]))
return 1;
return 0;
}
-void release_evntsel_nmi(unsigned int msr)
+static void __release_evntsel_nmi(int cpu, unsigned int msr)
{
unsigned int counter;
+ if (cpu < 0)
+ cpu = smp_processor_id();
counter = nmi_evntsel_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- clear_bit(counter, &__get_cpu_var(evntsel_nmi_owner)[0]);
+ clear_bit(counter, &per_cpu(evntsel_nmi_owner, cpu)[0]);
+}
+
+int reserve_evntsel_nmi(unsigned int msr)
+{
+ int cpu, i;
+ for_each_possible_cpu (cpu) {
+ if (!__reserve_evntsel_nmi(cpu, msr)) {
+ for_each_possible_cpu (i) {
+ if (i >= cpu)
+ break;
+ __release_evntsel_nmi(i, msr);
+ }
+ return 0;
+ }
+ }
+ return 1;
+}
+
+void release_evntsel_nmi(unsigned int msr)
+{
+ int cpu;
+ for_each_possible_cpu (cpu) {
+ __release_evntsel_nmi(cpu, msr);
+ }
}
static __cpuinit inline int nmi_known_cpu(void)
unsigned int *prev_nmi_count;
int cpu;
- /* Enable NMI watchdog for newer systems.
- Probably safe on most older systems too, but let's be careful.
- IBM ThinkPads use INT10 inside SMM and that allows early NMI inside SMM
- which hangs the system. Disable watchdog for all thinkpads */
- if (nmi_watchdog == NMI_DEFAULT && dmi_get_year(DMI_BIOS_DATE) >= 2004 &&
- !dmi_name_in_vendors("ThinkPad"))
- nmi_watchdog = NMI_LOCAL_APIC;
-
if ((nmi_watchdog == NMI_NONE) || (nmi_watchdog == NMI_DEFAULT))
return 0;
for_each_possible_cpu(cpu)
prev_nmi_count[cpu] = per_cpu(irq_stat, cpu).__nmi_count;
local_irq_enable();
- mdelay((10*1000)/nmi_hz); // wait 10 ticks
+ mdelay((20*1000)/nmi_hz); // wait 20 ticks
for_each_possible_cpu(cpu) {
#ifdef CONFIG_SMP
perfctr_msr = MSR_K7_PERFCTR0;
evntsel_msr = MSR_K7_EVNTSEL0;
- if (!reserve_perfctr_nmi(perfctr_msr))
+ if (!__reserve_perfctr_nmi(-1, perfctr_msr))
goto fail;
- if (!reserve_evntsel_nmi(evntsel_msr))
+ if (!__reserve_evntsel_nmi(-1, evntsel_msr))
goto fail1;
wrmsrl(perfctr_msr, 0UL);
wd->check_bit = 1ULL<<63;
return 1;
fail1:
- release_perfctr_nmi(perfctr_msr);
+ __release_perfctr_nmi(-1, perfctr_msr);
fail:
return 0;
}
wrmsr(wd->evntsel_msr, 0, 0);
- release_evntsel_nmi(wd->evntsel_msr);
- release_perfctr_nmi(wd->perfctr_msr);
+ __release_evntsel_nmi(-1, wd->evntsel_msr);
+ __release_perfctr_nmi(-1, wd->perfctr_msr);
}
#define P6_EVNTSEL0_ENABLE (1 << 22)
perfctr_msr = MSR_P6_PERFCTR0;
evntsel_msr = MSR_P6_EVNTSEL0;
- if (!reserve_perfctr_nmi(perfctr_msr))
+ if (!__reserve_perfctr_nmi(-1, perfctr_msr))
goto fail;
- if (!reserve_evntsel_nmi(evntsel_msr))
+ if (!__reserve_evntsel_nmi(-1, evntsel_msr))
goto fail1;
wrmsrl(perfctr_msr, 0UL);
wd->check_bit = 1ULL<<39;
return 1;
fail1:
- release_perfctr_nmi(perfctr_msr);
+ __release_perfctr_nmi(-1, perfctr_msr);
fail:
return 0;
}
wrmsr(wd->evntsel_msr, 0, 0);
- release_evntsel_nmi(wd->evntsel_msr);
- release_perfctr_nmi(wd->perfctr_msr);
+ __release_evntsel_nmi(-1, wd->evntsel_msr);
+ __release_perfctr_nmi(-1, wd->perfctr_msr);
}
/* Note that these events don't tick when the CPU idles. This means
cccr_val = P4_CCCR_OVF_PMI1 | P4_CCCR_ESCR_SELECT(4);
}
- if (!reserve_perfctr_nmi(perfctr_msr))
+ if (!__reserve_perfctr_nmi(-1, perfctr_msr))
goto fail;
- if (!reserve_evntsel_nmi(evntsel_msr))
+ if (!__reserve_evntsel_nmi(-1, evntsel_msr))
goto fail1;
evntsel = P4_ESCR_EVENT_SELECT(0x3F)
wd->check_bit = 1ULL<<39;
return 1;
fail1:
- release_perfctr_nmi(perfctr_msr);
+ __release_perfctr_nmi(-1, perfctr_msr);
fail:
return 0;
}
wrmsr(wd->cccr_msr, 0, 0);
wrmsr(wd->evntsel_msr, 0, 0);
- release_evntsel_nmi(wd->evntsel_msr);
- release_perfctr_nmi(wd->perfctr_msr);
+ __release_evntsel_nmi(-1, wd->evntsel_msr);
+ __release_perfctr_nmi(-1, wd->perfctr_msr);
}
#define ARCH_PERFMON_NMI_EVENT_SEL ARCH_PERFMON_UNHALTED_CORE_CYCLES_SEL
perfctr_msr = MSR_ARCH_PERFMON_PERFCTR0;
evntsel_msr = MSR_ARCH_PERFMON_EVENTSEL0;
- if (!reserve_perfctr_nmi(perfctr_msr))
+ if (!__reserve_perfctr_nmi(-1, perfctr_msr))
goto fail;
- if (!reserve_evntsel_nmi(evntsel_msr))
+ if (!__reserve_evntsel_nmi(-1, evntsel_msr))
goto fail1;
wrmsrl(perfctr_msr, 0UL);
wd->check_bit = 1ULL << (eax.split.bit_width - 1);
return 1;
fail1:
- release_perfctr_nmi(perfctr_msr);
+ __release_perfctr_nmi(-1, perfctr_msr);
fail:
return 0;
}
return;
wrmsr(wd->evntsel_msr, 0, 0);
- release_evntsel_nmi(wd->evntsel_msr);
- release_perfctr_nmi(wd->perfctr_msr);
+ __release_evntsel_nmi(-1, wd->evntsel_msr);
+ __release_perfctr_nmi(-1, wd->perfctr_msr);
}
void setup_apic_nmi_watchdog (void *unused)
#include "mach_timer.h"
+static int tsc_enabled;
+
/*
* On some systems the TSC frequency does not
* change with the cpu frequency. So we need
/*
* Fall back to jiffies if there's no TSC available:
*/
- if (unlikely(tsc_disable))
+ if (unlikely(!tsc_enabled))
/* No locking but a rare wrong value is not a big deal: */
return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ);
{
if (!tsc_unstable) {
tsc_unstable = 1;
+ tsc_enabled = 0;
/* Can be called before registration */
if (clocksource_tsc.mult)
clocksource_change_rating(&clocksource_tsc, 0);
if (check_tsc_unstable()) {
clocksource_tsc.rating = 0;
clocksource_tsc.flags &= ~CLOCK_SOURCE_IS_CONTINUOUS;
- }
+ } else
+ tsc_enabled = 1;
+
clocksource_register(&clocksource_tsc);
return;
*/
#include <linux/module.h>
-#include <linux/license.h>
#include <linux/cpu.h>
#include <linux/bootmem.h>
#include <linux/mm.h>
(((VROMLONGFUNC *)(rom->func)) (arg))
static struct vrom_header *vmi_rom;
-static int license_gplok;
static int disable_pge;
static int disable_pse;
static int disable_sep;
void (*flush_tlb)(int);
void (*set_initial_ap_state)(int, int);
void (*halt)(void);
+ void (*set_lazy_mode)(int mode);
} vmi_ops;
/* XXX move this to alternative.h */
}
#endif
+static void vmi_set_lazy_mode(int mode)
+{
+ static DEFINE_PER_CPU(int, lazy_mode);
+
+ if (!vmi_ops.set_lazy_mode)
+ return;
+
+ /* Modes should never nest or overlap */
+ BUG_ON(__get_cpu_var(lazy_mode) && !(mode == PARAVIRT_LAZY_NONE ||
+ mode == PARAVIRT_LAZY_FLUSH));
+
+ if (mode == PARAVIRT_LAZY_FLUSH) {
+ vmi_ops.set_lazy_mode(0);
+ vmi_ops.set_lazy_mode(__get_cpu_var(lazy_mode));
+ } else {
+ vmi_ops.set_lazy_mode(mode);
+ __get_cpu_var(lazy_mode) = mode;
+ }
+}
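The helper above only records a mode for real mode changes and treats a flush
as a temporary drop to mode 0 followed by re-entry of the recorded mode. A
minimal stand-alone sketch of that pattern, with purely illustrative names
(LAZY_FLUSH, backend_set_mode) and a single static variable standing in for
the per-CPU one:

/*
 * Hypothetical sketch, not part of the patch: a flush request drops the
 * backend to mode 0 and then re-enters the mode that was last recorded,
 * while a real mode change is forwarded and recorded for later flushes.
 */
#include <stdio.h>

#define LAZY_NONE  0
#define LAZY_FLUSH 3

static int recorded_mode;	/* stands in for the per-CPU lazy_mode */

static void backend_set_mode(int mode)
{
	printf("backend mode -> %d\n", mode);
}

static void set_lazy_mode(int mode)
{
	if (mode == LAZY_FLUSH) {
		backend_set_mode(0);			/* push out pending work */
		backend_set_mode(recorded_mode);	/* re-enter saved mode */
	} else {
		backend_set_mode(mode);
		recorded_mode = mode;			/* remember for flushes */
	}
}

int main(void)
{
	set_lazy_mode(1);		/* enter a lazy mode */
	set_lazy_mode(LAZY_FLUSH);	/* flush without leaving it */
	set_lazy_mode(LAZY_NONE);	/* leave lazy mode */
	return 0;
}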
+
static inline int __init check_vmi_rom(struct vrom_header *rom)
{
struct pci_header *pci;
rom->api_version_maj, rom->api_version_min,
pci->rom_version_maj, pci->rom_version_min);
- license_gplok = license_is_gpl_compatible(license);
- if (!license_gplok) {
- printk(KERN_WARNING "VMI: ROM license '%s' taints kernel... "
- "inlining disabled\n",
- license);
- add_taint(TAINT_PROPRIETARY_MODULE);
- }
+ /* Don't allow BSD/MIT here for now because we don't want to end up
+ with any binary only shim layers */
+ if (strcmp(license, "GPL") && strcmp(license, "GPL v2")) {
+ printk(KERN_WARNING "VMI: Non GPL license `%s' found for ROM. Not used.\n",
+ license);
+ return 0;
+ }
+
return 1;
}
do { \
reloc = call_vrom_long_func(vmi_rom, get_reloc, \
VMI_CALL_##vmicall); \
- if (rel->type != VMI_RELOCATION_NONE) { \
- BUG_ON(rel->type != VMI_RELOCATION_CALL_REL); \
+ if (rel->type == VMI_RELOCATION_CALL_REL) \
paravirt_ops.opname = (void *)rel->eip; \
- } else if (rel->type == VMI_RELOCATION_NOP) \
+ else if (rel->type == VMI_RELOCATION_NOP) \
paravirt_ops.opname = (void *)vmi_nop; \
+ else if (rel->type != VMI_RELOCATION_NONE) \
+ printk(KERN_WARNING "VMI: Unknown relocation " \
+ "type %d for " #vmicall"\n",\
+ rel->type); \
} while (0)
/*
para_wrap(load_esp0, vmi_load_esp0, set_kernel_stack, UpdateKernelStack);
para_fill(set_iopl_mask, SetIOPLMask);
para_fill(io_delay, IODelay);
- para_fill(set_lazy_mode, SetLazyMode);
+ para_wrap(set_lazy_mode, vmi_set_lazy_mode, set_lazy_mode, SetLazyMode);
/* user and kernel flush are just handled with different flags to FlushTLB */
para_wrap(flush_tlb_user, vmi_flush_tlb_user, flush_tlb, FlushTLB);
OUTPUT_ARCH(i386)
ENTRY(phys_startup_32)
jiffies = jiffies_64;
-_proxy_pda = 0;
+_proxy_pda = 1;
PHDRS {
text PT_LOAD FLAGS(5); /* R_E */
#include <linux/blkdev.h>
#include <linux/module.h>
#include <linux/backing-dev.h>
+#include <linux/interrupt.h>
#include <asm/uaccess.h>
#include <asm/mmx.h>
#ifndef CONFIG_X86_WP_WORKS_OK
if (unlikely(boot_cpu_data.wp_works_ok == 0) &&
((unsigned long )to) < TASK_SIZE) {
+ /*
+ * When we are in an atomic section (see
+ * mm/filemap.c:file_read_actor), return the full
+ * length to take the slow path.
+ */
+ if (in_atomic())
+ return n;
+
/*
* CPU does not honor the WP bit when writing
* from supervisory mode, and due to preemption or SMP,
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
+ arch_flush_lazy_mmu_mode();
return (void*) vaddr;
}
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
set_pte(kmap_pte-idx, pfn_pte(pfn, kmap_prot));
+ arch_flush_lazy_mmu_mode();
return (void*) vaddr;
}
DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 2950"),
},
},
+ {
+ .callback = set_bf_sort,
+ .ident = "Dell PowerEdge R900",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R900"),
+ },
+ },
{
.callback = set_bf_sort,
.ident = "HP ProLiant BL20p G3",
if ((err = pcibios_enable_resources(dev, mask)) < 0)
return err;
- return pcibios_enable_irq(dev);
+ if (!dev->msi_enabled)
+ return pcibios_enable_irq(dev);
+ return 0;
}
void pcibios_disable_device (struct pci_dev *dev)
{
- if (pcibios_disable_irq)
+ if (!dev->msi_enabled && pcibios_disable_irq)
pcibios_disable_irq(dev);
}
bool
select PCI if (!IA64_HP_SIM)
select ACPI if (!IA64_HP_SIM)
+ select PM if (!IA64_HP_SIM)
default y
help
The Itanium Processor Family is Intel's 64-bit successor to
nd = (struct ia64_mca_notify_die *)args->err;
/* Reason code 1 means machine check rendezvous */
- if ((val == DIE_INIT_MONARCH_ENTER || DIE_INIT_SLAVE_ENTER) &&
+ if ((val == DIE_INIT_MONARCH_ENTER || val == DIE_INIT_SLAVE_ENTER) &&
nd->sos->rv_rc == 1)
return NOTIFY_DONE;
{
struct msi_msg msg;
unsigned long dest_phys_id;
- unsigned int irq, vector;
+ int irq, vector;
irq = create_irq();
if (irq < 0)
set_irq_msi(irq, desc);
dest_phys_id = cpu_physical_id(first_cpu(cpu_online_map));
- vector = irq;
+ vector = irq_to_vector(irq);
msg.address_hi = 0;
msg.address_lo =
static int ia64_msi_retrigger_irq(unsigned int irq)
{
- unsigned int vector = irq;
+ unsigned int vector = irq_to_vector(irq);
ia64_resend_irq(vector);
return 1;
"features : %s\n"
"cpu number : %lu\n"
"cpu regs : %u\n"
- "cpu MHz : %lu.%06lu\n"
+ "cpu MHz : %lu.%03lu\n"
"itc MHz : %lu.%06lu\n"
"BogoMIPS : %lu.%02lu\n",
cpunum, c->vendor, c->family, c->model,
.show = show_cpuinfo
};
-static char brandname[128];
+#define MAX_BRANDS 8
+static char brandname[MAX_BRANDS][128];
static char * __cpuinit
get_model_name(__u8 family, __u8 model)
{
+ static int overflow;
char brand[128];
+ int i;
memcpy(brand, "Unknown", 8);
if (ia64_pal_get_brand_info(brand)) {
case 2: memcpy(brand, "Madison up to 9M cache", 23); break;
}
}
- if (brandname[0] == '\0')
- return strcpy(brandname, brand);
- else if (strcmp(brandname, brand) == 0)
- return brandname;
- else
- return kstrdup(brand, GFP_KERNEL);
+ for (i = 0; i < MAX_BRANDS; i++)
+ if (strcmp(brandname[i], brand) == 0)
+ return brandname[i];
+ for (i = 0; i < MAX_BRANDS; i++)
+ if (brandname[i][0] == '\0')
+ return strcpy(brandname[i], brand);
+ if (overflow++ == 0)
+ printk(KERN_ERR
+ "%s: Table overflow. Some processor model information will be missing\n",
+ __FUNCTION__);
+ return "Unknown";
}
static void __cpuinit
smp_callin (void)
{
int cpuid, phys_id, itc_master;
+ struct cpuinfo_ia64 *last_cpuinfo, *this_cpuinfo;
extern void ia64_init_itm(void);
extern volatile int time_keeper_id;
* Get our bogomips.
*/
ia64_init_itm();
- calibrate_delay();
+
+ /*
+ * Delay calibration can be skipped if the new processor is identical to the
+ * previous processor.
+ */
+ last_cpuinfo = cpu_data(cpuid - 1);
+ this_cpuinfo = local_cpu_data;
+ if (last_cpuinfo->itc_freq != this_cpuinfo->itc_freq ||
+ last_cpuinfo->proc_freq != this_cpuinfo->proc_freq ||
+ last_cpuinfo->features != this_cpuinfo->features ||
+ last_cpuinfo->revision != this_cpuinfo->revision ||
+ last_cpuinfo->family != this_cpuinfo->family ||
+ last_cpuinfo->archrev != this_cpuinfo->archrev ||
+ last_cpuinfo->model != this_cpuinfo->model)
+ calibrate_delay();
local_cpu_data->loops_per_jiffy = loops_per_jiffy;
#ifdef CONFIG_IA32_SUPPORT
/* physical address where the bootmem map is located */
unsigned long bootmap_start;
-/**
- * find_max_pfn - adjust the maximum page number callback
- * @start: start of range
- * @end: end of range
- * @arg: address of pointer to global max_pfn variable
- *
- * Passed as a callback function to efi_memmap_walk() to determine the highest
- * available page frame number in the system.
- */
-int
-find_max_pfn (unsigned long start, unsigned long end, void *arg)
-{
- unsigned long *max_pfnp = arg, pfn;
-
- pfn = (PAGE_ALIGN(end - 1) - PAGE_OFFSET) >> PAGE_SHIFT;
- if (pfn > *max_pfnp)
- *max_pfnp = pfn;
- return 0;
-}
-
/**
* find_bootmap_location - callback to find a memory area for the bootmap
* @start: start of region
reserve_memory();
/* first find highest page frame number */
- max_pfn = 0;
- efi_memmap_walk(find_max_pfn, &max_pfn);
-
+ min_low_pfn = ~0UL;
+ max_low_pfn = 0;
+ efi_memmap_walk(find_max_min_low_pfn, NULL);
+ max_pfn = max_low_pfn;
/* how many bytes to cover all the pages */
bootmap_size = bootmem_bootmap_pages(max_pfn) << PAGE_SHIFT;
if (bootmap_start == ~0UL)
panic("Cannot find %ld bytes for bootmap\n", bootmap_size);
- bootmap_size = init_bootmem(bootmap_start >> PAGE_SHIFT, max_pfn);
+ bootmap_size = init_bootmem_node(NODE_DATA(0),
+ (bootmap_start >> PAGE_SHIFT), 0, max_pfn);
/* Free all available memory, then mark bootmem-map as being in use. */
efi_memmap_walk(filter_rsvd_memory, free_bootmem);
bdp->node_low_pfn = max(epfn, bdp->node_low_pfn);
}
- min_low_pfn = min(min_low_pfn, bdp->node_boot_start>>PAGE_SHIFT);
- max_low_pfn = max(max_low_pfn, bdp->node_low_pfn);
-
return 0;
}
/* These actually end up getting called by call_pernode_memory() */
efi_memmap_walk(filter_rsvd_memory, build_node_maps);
efi_memmap_walk(filter_rsvd_memory, find_pernode_space);
+ efi_memmap_walk(find_max_min_low_pfn, NULL);
for_each_online_node(node)
if (mem_data[node].bootmem_data.node_low_pfn) {
if (stack_size > MAX_USER_STACK_SIZE)
stack_size = MAX_USER_STACK_SIZE;
- current->thread.rbs_bot = STACK_TOP - stack_size;
+ current->thread.rbs_bot = PAGE_ALIGN(current->mm->start_stack - stack_size);
}
/*
return 0;
}
+int
+find_max_min_low_pfn (unsigned long start, unsigned long end, void *arg)
+{
+ unsigned long pfn_start, pfn_end;
+#ifdef CONFIG_FLATMEM
+ pfn_start = (PAGE_ALIGN(__pa(start))) >> PAGE_SHIFT;
+ pfn_end = (PAGE_ALIGN(__pa(end - 1))) >> PAGE_SHIFT;
+#else
+ pfn_start = GRANULEROUNDDOWN(__pa(start)) >> PAGE_SHIFT;
+ pfn_end = GRANULEROUNDUP(__pa(end - 1)) >> PAGE_SHIFT;
+#endif
+ min_low_pfn = min(min_low_pfn, pfn_start);
+ max_low_pfn = max(max_low_pfn, pfn_end);
+ return 0;
+}
+
/*
* Boot command-line option "nolwsys" can be used to disable the use of any light-weight
* system call handler. When this option is in effect, all fsyscalls will end up bubbling
if (ret < 0)
return ret;
- return acpi_pci_irq_enable(dev);
+ if (!dev->msi_enabled)
+ return acpi_pci_irq_enable(dev);
+ return 0;
}
void
pcibios_disable_device (struct pci_dev *dev)
{
BUG_ON(atomic_read(&dev->enable_cnt));
- acpi_pci_irq_disable(dev);
+ if (!dev->msi_enabled)
+ acpi_pci_irq_disable(dev);
}
void
* There are errors which still need to be cleaned up by
* hubiio_crb_error_handler
*/
- mod_timer(recovery_timer, HZ * 5);
+ mod_timer(recovery_timer, jiffies + (HZ * 5));
BTE_PRINTK(("eh:%p:%d Marked Giving up\n", err_nodepda,
smp_processor_id()));
return 1;
icrbd.ii_icrb0_d_regval =
REMOTE_HUB_L(nasid, IIO_ICRB_D(i));
if (icrbd.d_bteop) {
- mod_timer(recovery_timer, HZ * 5);
+ mod_timer(recovery_timer, jiffies + (HZ * 5));
BTE_PRINTK(("eh:%p:%d Valid %d, Giving up\n",
err_nodepda, smp_processor_id(),
i));
status = BTE_LNSTAT_LOAD(bte);
if ((status & IBLS_ERROR) || !(status & IBLS_BUSY))
continue;
- mod_timer(recovery_timer, HZ * 5);
+ mod_timer(recovery_timer, jiffies + (HZ * 5));
BTE_PRINTK(("eh:%p:%d Marked Giving up\n", err_nodepda,
smp_processor_id()));
return 1;
addr = ((addr << 4) >> 4) | __IA64_UNCACHED_OFFSET;
dev->resource[idx].start = addr;
dev->resource[idx].end = addr + size;
+
+ /*
+ * if it's already in the device structure, remove it before
+ * inserting
+ */
+ if (dev->resource[idx].parent && dev->resource[idx].parent->child)
+ release_resource(&dev->resource[idx]);
+
if (dev->resource[idx].flags & IORESOURCE_IO)
- dev->resource[idx].parent = &ioport_resource;
+ insert_resource(&ioport_resource, &dev->resource[idx]);
else
- dev->resource[idx].parent = &iomem_resource;
+ insert_resource(&iomem_resource, &dev->resource[idx]);
/* If ROM, mark as shadowed in PROM */
if (idx == PCI_ROM_RESOURCE)
dev->resource[idx].flags |= IORESOURCE_ROM_BIOS_COPY;
continue; /* not PCI interconnect */
if (if_pci.translation & PCDP_PCI_TRANS_IOPORT)
- vga_console_iobase =
- if_pci.ioport_tra | __IA64_UNCACHED_OFFSET;
+ vga_console_iobase = if_pci.ioport_tra;
if (if_pci.translation & PCDP_PCI_TRANS_MMIO)
vga_console_membase =
* bus containing the VGA console.
*/
if (vga_console_iobase) {
- io_space[0].mmio_base = vga_console_iobase;
+ io_space[0].mmio_base =
+ (unsigned long) ioremap(vga_console_iobase, 0);
io_space[0].sparse = 0;
}
}
/*
- * If we're mapping for MSI, set the MSI bit in the ATE
+ * If we're mapping for MSI, set the MSI bit in the ATE. If it's a
+ * TIOCP based pci bus, we also need to set the PIO bit in the ATE.
*/
- if (dma_flags & SN_DMA_MSI)
+ if (dma_flags & SN_DMA_MSI) {
ate |= PCI32_ATE_MSI;
+ if (IS_TIOCP_SOFT(pcibus_info))
+ ate |= PCI32_ATE_PIO;
+ }
ate_write(pcibus_info, ate_index, ate_count, ate);
select R5000_CPU_SCACHE
select SYS_HAS_CPU_R5000
select SYS_SUPPORTS_32BIT_KERNEL
- select SYS_SUPPORTS_64BIT_KERNEL if EXPERIMENTAL
+ select SYS_SUPPORTS_64BIT_KERNEL if BROKEN
select SYS_SUPPORTS_LITTLE_ENDIAN
select GENERIC_HARDIRQS_NO__DO_IRQ
select SOC_AU1500
select SYS_SUPPORTS_LITTLE_ENDIAN
-config PNX8550_V2PCI
- bool "Philips PNX8550 based Viper2-PCI board"
- select PNX8550
- select SYS_SUPPORTS_LITTLE_ENDIAN
-
config PNX8550_JBS
bool "Philips PNX8550 based JBS board"
select PNX8550
select SYS_SUPPORTS_LITTLE_ENDIAN
select ARCH_SPARSEMEM_ENABLE
select GENERIC_HARDIRQS_NO__DO_IRQ
+ select NR_CPUS_DEFAULT_1
+ select SYS_SUPPORTS_SMP
help
Qemu is a software emulator which among other architectures also
can simulate a MIPS32 4Kc system. This patch adds support for the
select CPU_MIPSR2_IRQ_VI
select CPU_MIPSR2_SRS
select MIPS_MT
+ select NR_CPUS_DEFAULT_2
select SMP
select SYS_SUPPORTS_SMP
help
select CPU_MIPSR2_IRQ_VI
select CPU_MIPSR2_SRS
select MIPS_MT
- select NR_CPUS_DEFAULT_2
select NR_CPUS_DEFAULT_8
select SMP
select SYS_SUPPORTS_SMP
config MIPS_MT_SMTC_INSTANT_REPLAY
bool "Low-latency Dispatch of Deferred SMTC IPIs"
- depends on MIPS_MT_SMTC
+ depends on MIPS_MT_SMTC && !PREEMPT
default y
help
SMTC pseudo-interrupts between TCs are deferred and queued
config SYS_SUPPORTS_SMP
bool
+config NR_CPUS_DEFAULT_1
+ bool
+
config NR_CPUS_DEFAULT_2
bool
config NR_CPUS
int "Maximum number of CPUs (2-64)"
- range 2 64
+ range 1 64 if NR_CPUS_DEFAULT_1
depends on SMP
+ default "1" if NR_CPUS_DEFAULT_1
default "2" if NR_CPUS_DEFAULT_2
default "4" if NR_CPUS_DEFAULT_4
default "8" if NR_CPUS_DEFAULT_8
This allows you to specify the maximum number of CPUs which this
kernel will support. The maximum supported value is 32 for 32-bit
kernel and 64 for 64-bit kernels; the minimum value which makes
- sense is 2.
+ sense is 1 for Qemu (useful only for kernel debugging purposes)
+ and 2 for all others.
This is purely to save memory - each supported CPU adds
- approximately eight kilobytes to the kernel image.
+ approximately eight kilobytes to the kernel image. For best
+ performance you should round up your number of processors to the next
+ power of two.
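A rough worked example of the figures in the help text above, for illustration
only: stepping NR_CPUS from 2 up to 64 adds roughly 62 * 8 KB, about 496 KB, to
the kernel image, while a board with 6 processors would follow the power-of-two
advice by selecting NR_CPUS=8.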
#
# Timer Interrupt Frequency Configuration
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
CONFIG_DDB5477=y
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
CONFIG_MOMENCO_OCELOT_C=y
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
CONFIG_MOMENCO_OCELOT_G=y
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
CONFIG_PNX8550_JBS=y
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
CONFIG_PNX8550_STB810=y
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-CONFIG_PNX8550_V2PCI=y
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_cause() & read_c0_status();
+ unsigned int pending = read_c0_cause() & read_c0_status() & ST0_IM;
if (pending & STATUSF_IP7)
do_IRQ(CPU_IRQ_BASE + 7);
# CONFIG_MOMENCO_OCELOT_C is not set
# CONFIG_MOMENCO_OCELOT_G is not set
# CONFIG_MIPS_XXS1500 is not set
-# CONFIG_PNX8550_V2PCI is not set
# CONFIG_PNX8550_JBS is not set
# CONFIG_PNX8550_STB810 is not set
# CONFIG_DDB5477 is not set
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_status() & read_c0_cause();
+ unsigned int pending = read_c0_status() & read_c0_cause() & ST0_IM;
if (pending & STATUSF_IP7)
do_IRQ(CPU_IRQ_BASE + 7);
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_status() & read_c0_cause();
+ unsigned int pending = read_c0_status() & read_c0_cause() & ST0_IM;
if (pending & STATUSF_IP4) /* int2 hardware line (timer) */
do_IRQ(4);
char **arg = (char **) fw_arg1;
char **env = (char **) fw_arg2;
struct callvectors *cv = (struct callvectors *) fw_arg3;
- uint32_t tmp;
int i;
/* save the PROM vectors for debugging use */
static void __init setup_l3cache(unsigned long size);
/* setup code for a handoff from a version 1 PMON 2000 PROM */
-void PMON_v1_setup()
+static void PMON_v1_setup(void)
{
/* A wired TLB entry for the GT64120A and the serial port. The
GT64120A is going to be hit on every IRQ anyway - there's
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_status() & read_c0_cause();
+ unsigned int pending = read_c0_status() & read_c0_cause() & ST0_IM;
if (pending & STATUSF_IP7)
do_IRQ(WRPPMC_MIPS_TIMER_IRQ); /* CPU Compare/Count internal timer */
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_cause() & read_c0_status() & ST0_IM;
+ unsigned int pending = read_c0_cause() & read_c0_status();
if (pending & IE_IRQ5)
write_c0_compare(0);
* aligned and should be uncached to avoid cache flushing after every
* update.
*/
- vdma_pagetable_start = alloc_bootmem_low_pages(VDMA_PGTBL_SIZE);
+ vdma_pagetable_start =
+ (unsigned long) alloc_bootmem_low_pages(VDMA_PGTBL_SIZE);
if (!vdma_pagetable_start)
BUG();
dma_cache_wback_inv(vdma_pagetable_start, VDMA_PGTBL_SIZE);
SAVE_AT
SAVE_TEMP
LONG_L v0, PT_STATUS(sp)
- and v0, 1
+#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
+ and v0, ST0_IEP
+#else
+ and v0, ST0_IE
+#endif
beqz v0, 1f
jal trace_hardirqs_on
b 2f
.align 5
NESTED(handle_int, PT_SIZE, sp)
+#ifdef CONFIG_TRACE_IRQFLAGS
+ /*
+ * Check to see if the interrupted code has just disabled
+ * interrupts and ignore this interrupt for now if so.
+ *
+ * local_irq_disable() disables interrupts and then calls
+ * trace_hardirqs_off() to track the state. If an interrupt is taken
+ * after interrupts are disabled but before the state is updated
+ * it will appear to restore_all that it is incorrectly returning with
+ * interrupts disabled
+ */
+ .set push
+ .set noat
+ mfc0 k0, CP0_STATUS
+#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
+ and k0, ST0_IEP
+ bnez k0, 1f
+
+ mfc0 k0, CP0_EPC
+ .set noreorder
+ j k0
+ rfe
+#else
+ and k0, ST0_IE
+ bnez k0, 1f
+
+ eret
+#endif
+1:
+ .set pop
+#endif
SAVE_ALL
CLI
TRACE_IRQS_OFF
* during service by SMTC kernel, we also want to
* pass the IM value to be cleared.
*/
-EXPORT(except_vec_vi_mori)
+FEXPORT(except_vec_vi_mori)
ori a0, $0, 0
#endif /* CONFIG_MIPS_MT_SMTC */
-EXPORT(except_vec_vi_lui)
+FEXPORT(except_vec_vi_lui)
lui v0, 0 /* Patched */
j except_vec_vi_handler
-EXPORT(except_vec_vi_ori)
+FEXPORT(except_vec_vi_ori)
ori v0, 0 /* Patched */
.set pop
END(except_vec_vi)
_ehb
#endif /* CONFIG_MIPS_MT_SMTC */
CLI
+#ifdef CONFIG_TRACE_IRQFLAGS
+ move s0, v0
+#ifdef CONFIG_MIPS_MT_SMTC
+ move s1, a0
+#endif
TRACE_IRQS_OFF
+#ifdef CONFIG_MIPS_MT_SMTC
+ move a0, s1
+#endif
+ move v0, s0
+#endif
LONG_L s0, TI_REGS($28)
LONG_S sp, TI_REGS($28)
#define MTSP_SYSCALL_GETTIME (MTSP_SYSCALL_BASE + 7)
#define MTSP_SYSCALL_PIPEFREQ (MTSP_SYSCALL_BASE + 8)
#define MTSP_SYSCALL_GETTOD (MTSP_SYSCALL_BASE + 9)
+#define MTSP_SYSCALL_IOCTL (MTSP_SYSCALL_BASE + 10)
#define MTSP_O_RDONLY 0x0000
#define MTSP_O_WRONLY 0x0001
{ MTSP_SYSCALL_CLOSE, __NR_close },
{ MTSP_SYSCALL_READ, __NR_read },
{ MTSP_SYSCALL_WRITE, __NR_write },
- { MTSP_SYSCALL_LSEEK32, __NR_lseek }
+ { MTSP_SYSCALL_LSEEK32, __NR_lseek },
+ { MTSP_SYSCALL_IOCTL, __NR_ioctl }
};
static int sp_syscall(int num, int arg0, int arg1, int arg2, int arg3)
struct mtsp_syscall_generic generic;
struct mtsp_syscall_ret ret;
struct kspd_notifications *n;
+ unsigned long written;
+ mm_segment_t old_fs;
struct timeval tv;
struct timezone tz;
int cmd;
ret.retval = -1;
- if (!rtlx_read(RTLX_CHANNEL_SYSIO, &sc, sizeof(struct mtsp_syscall), 0)) {
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+
+ if (!rtlx_read(RTLX_CHANNEL_SYSIO, &sc, sizeof(struct mtsp_syscall))) {
+ set_fs(old_fs);
printk(KERN_ERR "Expected request but nothing to read\n");
return;
}
size = sc.size;
if (size) {
- if (!rtlx_read(RTLX_CHANNEL_SYSIO, &generic, size, 0)) {
+ if (!rtlx_read(RTLX_CHANNEL_SYSIO, &generic, size)) {
+ set_fs(old_fs);
printk(KERN_ERR "Expected request but nothing to read\n");
return;
}
if (vpe_getuid(SP_VPE))
sp_setfsuidgid( 0, 0);
- if ((rtlx_write(RTLX_CHANNEL_SYSIO, &ret, sizeof(struct mtsp_syscall_ret), 0))
- < sizeof(struct mtsp_syscall_ret))
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ written = rtlx_write(RTLX_CHANNEL_SYSIO, &ret, sizeof(ret));
+ set_fs(old_fs);
+ if (written < sizeof(ret))
printk("KSPD: sp_work_handle_request failed to send to SP\n");
}
return ret;
}
+#ifdef CONFIG_SYSVIPC
+
asmlinkage long
sys32_ipc (u32 call, int first, int second, int third, u32 ptr, u32 fifth)
{
return err;
}
+#else
+
+asmlinkage long
+sys32_ipc (u32 call, int first, int second, int third, u32 ptr, u32 fifth)
+{
+ return -ENOSYS;
+}
+
+#endif /* CONFIG_SYSVIPC */
+
#ifdef CONFIG_MIPS32_N32
asmlinkage long sysn32_semctl(int semid, int semnum, int cmd, u32 arg)
{
*/
LEAF(_restore_fp_context)
EX lw t0, SC_FPC_CSR(a0)
-
- /* Fail if the CSR has exceptions pending */
- srl t1, t0, 5
- and t1, t0
- andi t1, 0x1f << 7
- bnez t1, fault
- nop
-
#ifdef CONFIG_64BIT
EX ldc1 $f1, SC_FPREGS+8(a0)
EX ldc1 $f3, SC_FPREGS+24(a0)
LEAF(_restore_fp_context32)
/* Restore an o32 sigcontext. */
EX lw t0, SC32_FPC_CSR(a0)
-
- /* Fail if the CSR has exceptions pending */
- srl t1, t0, 5
- and t1, t0
- andi t1, 0x1f << 7
- bnez t1, fault
- nop
-
EX ldc1 $f0, SC32_FPREGS+0(a0)
EX ldc1 $f2, SC32_FPREGS+16(a0)
EX ldc1 $f4, SC32_FPREGS+32(a0)
wait_queue_head_t rt_queue;
wait_queue_head_t lx_queue;
atomic_t in_open;
+ struct mutex mutex;
} channel_wqs[RTLX_CHANNELS];
static struct irqaction irq;
int rtlx_open(int index, int can_sleep)
{
- volatile struct rtlx_info **p;
+ struct rtlx_info **p;
struct rtlx_channel *chan;
enum rtlx_state state;
int ret = 0;
}
}
+ smp_rmb();
if (*p == NULL) {
if (can_sleep) {
- __wait_event_interruptible(channel_wqs[index].lx_queue,
- *p != NULL,
- ret);
- if (ret)
+ DEFINE_WAIT(wait);
+
+ for (;;) {
+ prepare_to_wait(&channel_wqs[index].lx_queue, &wait, TASK_INTERRUPTIBLE);
+ smp_rmb();
+ if (*p != NULL)
+ break;
+ if (!signal_pending(current)) {
+ schedule();
+ continue;
+ }
+ ret = -ERESTARTSYS;
goto out_fail;
+ }
+ finish_wait(&channel_wqs[index].lx_queue, &wait);
} else {
printk(" *vpe_get_shared is NULL. "
"Has an SP program been loaded?\n");
return write_spacefree(chan->rt_read, chan->rt_write, chan->buffer_size);
}
-static inline void copy_to(void *dst, void *src, size_t count, int user)
-{
- if (user)
- copy_to_user(dst, src, count);
- else
- memcpy(dst, src, count);
-}
-
-static inline void copy_from(void *dst, void *src, size_t count, int user)
+ssize_t rtlx_read(int index, void __user *buff, size_t count)
{
- if (user)
- copy_from_user(dst, src, count);
- else
- memcpy(dst, src, count);
-}
-
-ssize_t rtlx_read(int index, void *buff, size_t count, int user)
-{
- size_t fl = 0L;
+ size_t lx_write, fl = 0L;
struct rtlx_channel *lx;
+ unsigned long failed;
if (rtlx == NULL)
return -ENOSYS;
lx = &rtlx->channel[index];
+ mutex_lock(&channel_wqs[index].mutex);
+ smp_rmb();
+ lx_write = lx->lx_write;
+
/* find out how much in total */
count = min(count,
- (size_t)(lx->lx_write + lx->buffer_size - lx->lx_read)
+ (size_t)(lx_write + lx->buffer_size - lx->lx_read)
% lx->buffer_size);
/* then how much from the read pointer onwards */
- fl = min( count, (size_t)lx->buffer_size - lx->lx_read);
+ fl = min(count, (size_t)lx->buffer_size - lx->lx_read);
- copy_to(buff, &lx->lx_buffer[lx->lx_read], fl, user);
+ failed = copy_to_user(buff, lx->lx_buffer + lx->lx_read, fl);
+ if (failed)
+ goto out;
/* and if there is anything left at the beginning of the buffer */
- if ( count - fl )
- copy_to (buff + fl, lx->lx_buffer, count - fl, user);
+ if (count - fl)
+ failed = copy_to_user(buff + fl, lx->lx_buffer, count - fl);
- /* update the index */
- lx->lx_read += count;
- lx->lx_read %= lx->buffer_size;
+out:
+ count -= failed;
+
+ smp_wmb();
+ lx->lx_read = (lx->lx_read + count) % lx->buffer_size;
+ smp_wmb();
+ mutex_unlock(&channel_wqs[index].mutex);
return count;
}
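rtlx_read() above drains a circular buffer: the readable byte count is the
modular distance from lx_read to lx_write, and the copy is split into at most
two chunks when it wraps past the end of the buffer. A stand-alone sketch of
that wrap-around arithmetic, with illustrative names and plain memcpy() in
place of copy_to_user():

/*
 * Hypothetical sketch, not part of the patch: read up to 'count' queued
 * bytes from a ring buffer, copying the tail chunk first and then the
 * wrapped remainder from the start of the buffer.
 */
#include <stdio.h>
#include <string.h>

static size_t ring_read(const char *buf, size_t size,
			size_t *read, size_t write,
			char *out, size_t count)
{
	size_t avail = (write + size - *read) % size;	/* bytes queued */
	size_t fl;

	if (count > avail)
		count = avail;
	fl = count < size - *read ? count : size - *read; /* chunk up to end */

	memcpy(out, buf + *read, fl);
	if (count - fl)					/* wrapped remainder */
		memcpy(out + fl, buf, count - fl);

	*read = (*read + count) % size;
	return count;
}

int main(void)
{
	/* logical message "ABCDEF" stored wrapped: slots 4-7 then 0-1 */
	char ring[8] = { 'E', 'F', '.', '.', 'A', 'B', 'C', 'D' };
	size_t rd = 4;
	char out[9] = { 0 };
	size_t n = ring_read(ring, sizeof(ring), &rd, 2, out, sizeof(out) - 1);

	printf("read %zu bytes: %s\n", n, out);	/* read 6 bytes: ABCDEF */
	return 0;
}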
-ssize_t rtlx_write(int index, void *buffer, size_t count, int user)
+ssize_t rtlx_write(int index, const void __user *buffer, size_t count)
{
struct rtlx_channel *rt;
+ unsigned long failed;
+ size_t rt_read;
size_t fl;
if (rtlx == NULL)
rt = &rtlx->channel[index];
+ mutex_lock(&channel_wqs[index].mutex);
+ smp_rmb();
+ rt_read = rt->rt_read;
+
/* total number of bytes to copy */
count = min(count,
- (size_t)write_spacefree(rt->rt_read, rt->rt_write,
- rt->buffer_size));
+ (size_t)write_spacefree(rt_read, rt->rt_write, rt->buffer_size));
/* first bit from write pointer to the end of the buffer, or count */
fl = min(count, (size_t) rt->buffer_size - rt->rt_write);
- copy_from (&rt->rt_buffer[rt->rt_write], buffer, fl, user);
+ failed = copy_from_user(rt->rt_buffer + rt->rt_write, buffer, fl);
+ if (failed)
+ goto out;
/* if there's any left copy to the beginning of the buffer */
- if( count - fl )
- copy_from (rt->rt_buffer, buffer + fl, count - fl, user);
+ if (count - fl) {
+ failed = copy_from_user(rt->rt_buffer, buffer + fl, count - fl);
+ }
+
+out:
+ count -= failed;
- rt->rt_write += count;
- rt->rt_write %= rt->buffer_size;
+ smp_wmb();
+ rt->rt_write = (rt->rt_write + count) % rt->buffer_size;
+ smp_wmb();
+ mutex_unlock(&channel_wqs[index].mutex);
- return(count);
+ return count;
}
return 0; // -EAGAIN makes cat whinge
}
- return rtlx_read(minor, buffer, count, 1);
+ return rtlx_read(minor, buffer, count);
}
static ssize_t file_write(struct file *file, const char __user * buffer,
return ret;
}
- return rtlx_write(minor, (void *)buffer, count, 1);
+ return rtlx_write(minor, buffer, count);
}
static const struct file_operations rtlx_fops = {
init_waitqueue_head(&channel_wqs[i].rt_queue);
init_waitqueue_head(&channel_wqs[i].lx_queue);
atomic_set(&channel_wqs[i].in_open, 0);
+ mutex_init(&channel_wqs[i].mutex);
dev = device_create(mt_class, NULL, MKDEV(major, i),
"%s%d", module_name, i);
*/
extern int install_sigtramp(unsigned int __user *tramp, unsigned int syscall);
+/* Check and clear pending FPU exceptions in saved CSR */
+extern int fpcsr_pending(unsigned int __user *fpcsr);
+
+/* Make sure we will not lose FPU ownership */
+#ifdef CONFIG_PREEMPT
+#define lock_fpu_owner() preempt_disable()
+#define unlock_fpu_owner() preempt_enable()
+#else
+#define lock_fpu_owner() pagefault_disable()
+#define unlock_fpu_owner() pagefault_enable()
+#endif
+
#endif /* __SIGNAL_COMMON_H */
#include <linux/ptrace.h>
#include <linux/unistd.h>
#include <linux/compiler.h>
+#include <linux/uaccess.h>
#include <asm/abi.h>
#include <asm/asm.h>
#include <asm/cacheflush.h>
#include <asm/fpu.h>
#include <asm/sim.h>
-#include <asm/uaccess.h>
#include <asm/ucontext.h>
#include <asm/cpu-features.h>
#include <asm/war.h>
/*
* Helper routines
*/
+static int protected_save_fp_context(struct sigcontext __user *sc)
+{
+ int err;
+ while (1) {
+ lock_fpu_owner();
+ own_fpu_inatomic(1);
+ err = save_fp_context(sc); /* this might fail */
+ unlock_fpu_owner();
+ if (likely(!err))
+ break;
+ /* touch the sigcontext and try again */
+ err = __put_user(0, &sc->sc_fpregs[0]) |
+ __put_user(0, &sc->sc_fpregs[31]) |
+ __put_user(0, &sc->sc_fpc_csr);
+ if (err)
+ break; /* really bad sigcontext */
+ }
+ return err;
+}
+
+static int protected_restore_fp_context(struct sigcontext __user *sc)
+{
+ int err, tmp;
+ while (1) {
+ lock_fpu_owner();
+ own_fpu_inatomic(0);
+ err = restore_fp_context(sc); /* this might fail */
+ unlock_fpu_owner();
+ if (likely(!err))
+ break;
+ /* touch the sigcontext and try again */
+ err = __get_user(tmp, &sc->sc_fpregs[0]) |
+ __get_user(tmp, &sc->sc_fpregs[31]) |
+ __get_user(tmp, &sc->sc_fpc_csr);
+ if (err)
+ break; /* really bad sigcontext */
+ }
+ return err;
+}
+
int setup_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc)
{
int err = 0;
int i;
+ unsigned int used_math;
err |= __put_user(regs->cp0_epc, &sc->sc_pc);
err |= __put_user(rddsp(DSP_MASK), &sc->sc_dsp);
}
- err |= __put_user(!!used_math(), &sc->sc_used_math);
+ used_math = !!used_math();
+ err |= __put_user(used_math, &sc->sc_used_math);
- if (used_math()) {
+ if (used_math) {
/*
* Save FPU state to signal context. Signal handler
* will "inherit" current FPU state.
*/
- preempt_disable();
+ err |= protected_save_fp_context(sc);
+ }
+ return err;
+}
- if (!is_fpu_owner()) {
- own_fpu();
- restore_fp(current);
- }
- err |= save_fp_context(sc);
+int fpcsr_pending(unsigned int __user *fpcsr)
+{
+ int err, sig = 0;
+ unsigned int csr, enabled;
- preempt_enable();
+ err = __get_user(csr, fpcsr);
+ enabled = FPU_CSR_UNI_X | ((csr & FPU_CSR_ALL_E) << 5);
+ /*
+ * If the signal handler set some FPU exceptions, clear it and
+ * send SIGFPE.
+ */
+ if (csr & enabled) {
+ csr &= ~enabled;
+ err |= __put_user(csr, fpcsr);
+ sig = SIGFPE;
}
- return err;
+ return err ?: sig;
+}
+
+static int
+check_and_restore_fp_context(struct sigcontext __user *sc)
+{
+ int err, sig;
+
+ err = sig = fpcsr_pending(&sc->sc_fpc_csr);
+ if (err > 0)
+ err = 0;
+ err |= protected_restore_fp_context(sc);
+ return err ?: sig;
}
int restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc)
err |= __get_user(used_math, &sc->sc_used_math);
conditional_used_math(used_math);
- preempt_disable();
-
- if (used_math()) {
+ if (used_math) {
/* restore fpu context if we have used it before */
- own_fpu();
- err |= restore_fp_context(sc);
+ if (!err)
+ err = check_and_restore_fp_context(sc);
} else {
/* signal handler may have used FPU. Give it up. */
- lose_fpu();
+ lose_fpu(0);
}
- preempt_enable();
-
return err;
}
{
struct sigframe __user *frame;
sigset_t blocked;
+ int sig;
frame = (struct sigframe __user *) regs.regs[29];
if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
- if (restore_sigcontext(&regs, &frame->sf_sc))
+ sig = restore_sigcontext(&regs, &frame->sf_sc);
+ if (sig < 0)
goto badframe;
+ else if (sig)
+ force_sig(sig, current);
/*
* Don't let your children do this ...
struct rt_sigframe __user *frame;
sigset_t set;
stack_t st;
+ int sig;
frame = (struct rt_sigframe __user *) regs.regs[29];
if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
- if (restore_sigcontext(&regs, &frame->rs_uc.uc_mcontext))
+ sig = restore_sigcontext(&regs, &frame->rs_uc.uc_mcontext);
+ if (sig < 0)
goto badframe;
+ else if (sig)
+ force_sig(sig, current);
if (__copy_from_user(&st, &frame->rs_uc.uc_stack, sizeof(st)))
goto badframe;
#include <linux/compat.h>
#include <linux/suspend.h>
#include <linux/compiler.h>
+#include <linux/uaccess.h>
#include <asm/abi.h>
#include <asm/asm.h>
#include <linux/bitops.h>
#include <asm/cacheflush.h>
#include <asm/sim.h>
-#include <asm/uaccess.h>
#include <asm/ucontext.h>
#include <asm/system.h>
#include <asm/fpu.h>
/*
* sigcontext handlers
*/
+static int protected_save_fp_context32(struct sigcontext32 __user *sc)
+{
+ int err;
+ while (1) {
+ lock_fpu_owner();
+ own_fpu_inatomic(1);
+ err = save_fp_context32(sc); /* this might fail */
+ unlock_fpu_owner();
+ if (likely(!err))
+ break;
+ /* touch the sigcontext and try again */
+ err = __put_user(0, &sc->sc_fpregs[0]) |
+ __put_user(0, &sc->sc_fpregs[31]) |
+ __put_user(0, &sc->sc_fpc_csr);
+ if (err)
+ break; /* really bad sigcontext */
+ }
+ return err;
+}
+
+static int protected_restore_fp_context32(struct sigcontext32 __user *sc)
+{
+ int err, tmp;
+ while (1) {
+ lock_fpu_owner();
+ own_fpu_inatomic(0);
+ err = restore_fp_context32(sc); /* this might fail */
+ unlock_fpu_owner();
+ if (likely(!err))
+ break;
+ /* touch the sigcontext and try again */
+ err = __get_user(tmp, &sc->sc_fpregs[0]) |
+ __get_user(tmp, &sc->sc_fpregs[31]) |
+ __get_user(tmp, &sc->sc_fpc_csr);
+ if (err)
+ break; /* really bad sigcontext */
+ }
+ return err;
+}
+
static int setup_sigcontext32(struct pt_regs *regs,
struct sigcontext32 __user *sc)
{
int err = 0;
int i;
+ u32 used_math;
err |= __put_user(regs->cp0_epc, &sc->sc_pc);
err |= __put_user(mflo3(), &sc->sc_lo3);
}
- err |= __put_user(!!used_math(), &sc->sc_used_math);
+ used_math = !!used_math();
+ err |= __put_user(used_math, &sc->sc_used_math);
- if (used_math()) {
+ if (used_math) {
/*
* Save FPU state to signal context. Signal handler
* will "inherit" current FPU state.
*/
- preempt_disable();
-
- if (!is_fpu_owner()) {
- own_fpu();
- restore_fp(current);
- }
- err |= save_fp_context32(sc);
-
- preempt_enable();
+ err |= protected_save_fp_context32(sc);
}
return err;
}
+static int
+check_and_restore_fp_context32(struct sigcontext32 __user *sc)
+{
+ int err, sig;
+
+ err = sig = fpcsr_pending(&sc->sc_fpc_csr);
+ if (err > 0)
+ err = 0;
+ err |= protected_restore_fp_context32(sc);
+ return err ?: sig;
+}
+
static int restore_sigcontext32(struct pt_regs *regs,
struct sigcontext32 __user *sc)
{
err |= __get_user(used_math, &sc->sc_used_math);
conditional_used_math(used_math);
- preempt_disable();
-
- if (used_math()) {
+ if (used_math) {
/* restore fpu context if we have used it before */
- own_fpu();
- err |= restore_fp_context32(sc);
+ if (!err)
+ err = check_and_restore_fp_context32(sc);
} else {
/* signal handler may have used FPU. Give it up. */
- lose_fpu();
+ lose_fpu(0);
}
- preempt_enable();
-
return err;
}
{
struct sigframe32 __user *frame;
sigset_t blocked;
+ int sig;
frame = (struct sigframe32 __user *) regs.regs[29];
if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
- if (restore_sigcontext32(&regs, &frame->sf_sc))
+ sig = restore_sigcontext32(&regs, &frame->sf_sc);
+ if (sig < 0)
goto badframe;
+ else if (sig)
+ force_sig(sig, current);
/*
* Don't let your children do this ...
sigset_t set;
stack_t st;
s32 sp;
+ int sig;
frame = (struct rt_sigframe32 __user *) regs.regs[29];
if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
- if (restore_sigcontext32(&regs, &frame->rs_uc.uc_mcontext))
+ sig = restore_sigcontext32(&regs, &frame->rs_uc.uc_mcontext);
+ if (sig < 0)
goto badframe;
+ else if (sig)
+ force_sig(sig, current);
/* The ucontext contains a stack32_t, so we must convert! */
if (__get_user(sp, &frame->rs_uc.uc_stack.ss_sp))
sigset_t set;
stack_t st;
s32 sp;
+ int sig;
frame = (struct rt_sigframe_n32 __user *) regs.regs[29];
if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
- if (restore_sigcontext(&regs, &frame->rs_uc.uc_mcontext))
+ sig = restore_sigcontext(&regs, &frame->rs_uc.uc_mcontext);
+ if (sig < 0)
goto badframe;
+ else if (sig)
+ force_sig(sig, current);
/* The ucontext contains a stack32_t, so we must convert! */
if (__get_user(sp, &frame->rs_uc.uc_stack.ss_sp))
#include <linux/sched.h>
#include <linux/cpumask.h>
#include <linux/interrupt.h>
+#include <linux/kernel_stat.h>
#include <linux/module.h>
#include <asm/cpu.h>
#include <asm/hazards.h>
#include <asm/mmu_context.h>
#include <asm/smp.h>
+#include <asm/mips-boards/maltaint.h>
#include <asm/mipsregs.h>
#include <asm/cacheflush.h>
#include <asm/time.h>
void ipi_decode(struct smtc_ipi *);
static void post_direct_ipi(int cpu, struct smtc_ipi *pipi);
-static void setup_cross_vpe_interrupts(void);
+static void setup_cross_vpe_interrupts(unsigned int nvpe);
void init_smtc_stats(void);
/* Global SMTC Status */
int imstuckcount[2][8];
/* vpemask represents IM/IE bits of per-VPE Status registers, low-to-high */
-int vpemask[2][8] = {{0,1,1,0,0,0,0,1},{0,1,0,0,0,0,0,1}};
+int vpemask[2][8] = {
+ {0, 0, 1, 0, 0, 0, 0, 1},
+ {0, 0, 0, 0, 0, 0, 0, 1}
+};
int tcnoprog[NR_CPUS];
static atomic_t idle_hook_initialized = {0};
static int clock_hang_reported[NR_CPUS];
/* If we have multiple VPEs running, set up the cross-VPE interrupt */
- if (nvpe > 1)
- setup_cross_vpe_interrupts();
+ setup_cross_vpe_interrupts(nvpe);
/* Set up queue of free IPI "messages". */
nipi = NR_CPUS * IPIBUF_PER_CPU;
int setup_irq_smtc(unsigned int irq, struct irqaction * new,
unsigned long hwmask)
{
+ unsigned int vpe = current_cpu_data.vpe_id;
+
irq_hwmask[irq] = hwmask;
+#ifdef CONFIG_SMTC_IDLE_HOOK_DEBUG
+ vpemask[vpe][irq - MIPSCPU_INT_BASE] = 1;
+#endif
return setup_irq(irq, new);
}
smtc_ipi_nq(&freeIPIq, pipi);
switch (type_copy) {
case SMTC_CLOCK_TICK:
+ irq_enter();
+ kstat_this_cpu.irqs[MIPSCPU_INT_BASE + MIPSCPU_INT_CPUCTR]++;
/* Invoke Clock "Interrupt" */
ipi_timer_latch[dest_copy] = 0;
#ifdef CONFIG_SMTC_IDLE_HOOK_DEBUG
clock_hang_reported[dest_copy] = 0;
#endif /* CONFIG_SMTC_IDLE_HOOK_DEBUG */
local_timer_interrupt(0, NULL);
+ irq_exit();
break;
case LINUX_SMP_IPI:
switch ((int)arg_copy) {
static struct irqaction irq_ipi;
-static void setup_cross_vpe_interrupts(void)
+static void setup_cross_vpe_interrupts(unsigned int nvpe)
{
+ if (nvpe < 1)
+ return;
+
if (!cpu_has_vint)
panic("SMTC Kernel requires Vectored Interrupt support");
/*
* SMTC-specific hacks invoked from elsewhere in the kernel.
+ *
+ * smtc_ipi_replay is called from raw_local_irq_restore which is only ever
+ * called with interrupts disabled. We do rely on interrupts being disabled
+ * here because using spin_lock_irqsave()/spin_unlock_irqrestore() would
+ * result in a recursive call to raw_local_irq_restore().
*/
-void smtc_ipi_replay(void)
+static void __smtc_ipi_replay(void)
{
+ unsigned int cpu = smp_processor_id();
+
/*
* To the extent that we've ever turned interrupts off,
* we may have accumulated deferred IPIs. This is subtle.
* is clear, and we'll handle it as a real pseudo-interrupt
* and not a pseudo-pseudo interrupt.
*/
- if (IPIQ[smp_processor_id()].depth > 0) {
- struct smtc_ipi *pipi;
- extern void self_ipi(struct smtc_ipi *);
+ if (IPIQ[cpu].depth > 0) {
+ while (1) {
+ struct smtc_ipi_q *q = &IPIQ[cpu];
+ struct smtc_ipi *pipi;
+ extern void self_ipi(struct smtc_ipi *);
+
+ spin_lock(&q->lock);
+ pipi = __smtc_ipi_dq(q);
+ spin_unlock(&q->lock);
+ if (!pipi)
+ break;
- while ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()]))) {
self_ipi(pipi);
- smtc_cpu_stats[smp_processor_id()].selfipis++;
+ smtc_cpu_stats[cpu].selfipis++;
}
}
}
+void smtc_ipi_replay(void)
+{
+ raw_local_irq_disable();
+ __smtc_ipi_replay();
+}
+
EXPORT_SYMBOL(smtc_ipi_replay);
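The comment above __smtc_ipi_replay() notes that it runs with interrupts
already disabled because it is reached from raw_local_irq_restore(), so taking
a lock that saved and restored the interrupt state would re-enter the restore
path. A loose user-space sketch of that recursion hazard, with illustrative
names only:

/*
 * Hypothetical sketch, not part of the patch: the replay routine is reached
 * while "interrupts" are being restored, so it must not itself save and
 * restore the interrupt state or it would recurse into the restore path.
 */
#include <stdio.h>

static int irqs_enabled;
static int pending;			/* deferred work queued while disabled */

static void replay_pending(void)
{
	while (pending) {
		pending--;
		printf("replaying deferred item, %d left\n", pending);
		/*
		 * A lock that did the equivalent of spin_unlock_irqrestore()
		 * here would call local_irq_restore() again and recurse;
		 * a plain lock that leaves the IRQ state alone does not,
		 * which is the choice the patch makes.
		 */
	}
}

static void local_irq_restore(int flags)
{
	irqs_enabled = flags;
	if (irqs_enabled && pending)
		replay_pending();	/* replay before interrupts are usable */
}

int main(void)
{
	pending = 3;			/* items deferred while "disabled" */
	local_irq_restore(1);		/* restoring triggers the replay */
	return 0;
}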
void smtc_idle_loop_hook(void)
* is in use, there should never be any.
*/
#ifndef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY
- smtc_ipi_replay();
+ {
+ unsigned long flags;
+
+ local_irq_save(flags);
+ __smtc_ipi_replay();
+ local_irq_restore(flags);
+ }
#endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */
}
if (fcr31 & FPU_CSR_UNI_X) {
int sig;
- preempt_disable();
-
-#ifdef CONFIG_PREEMPT
- if (!is_fpu_owner()) {
- /* We might lose fpu before disabling preempt... */
- own_fpu();
- BUG_ON(!used_math());
- restore_fp(current);
- }
-#endif
/*
* Unimplemented operation exception. If we've got the full
* software emulator on-board, let's use it...
* register operands before invoking the emulator, which seems
* a bit extreme for what should be an infrequent event.
*/
- save_fp(current);
/* Ensure 'resume' not overwrite saved fp context again. */
- lose_fpu();
-
- preempt_enable();
+ lose_fpu(1);
/* Run the emulator */
sig = fpu_emulator_cop1Handler (regs, &current->thread.fpu, 1);
- preempt_disable();
-
- own_fpu(); /* Using the FPU again. */
/*
* We can't allow the emulated instruction to leave any of
* the cause bit set in $fcr31.
current->thread.fpu.fcr31 &= ~FPU_CSR_ALL_X;
/* Restore the hardware register state */
- restore_fp(current);
-
- preempt_enable();
+ own_fpu(1); /* Using the FPU again. */
/* If something went wrong, signal */
if (sig)
unsigned int opcode, bcode;
siginfo_t info;
- if (get_user(opcode, (unsigned int __user *) exception_epc(regs)))
+ if (__get_user(opcode, (unsigned int __user *) exception_epc(regs)))
goto out_sigsegv;
/*
unsigned int opcode, tcode = 0;
siginfo_t info;
- if (get_user(opcode, (unsigned int __user *) exception_epc(regs)))
+ if (__get_user(opcode, (unsigned int __user *) exception_epc(regs)))
goto out_sigsegv;
/* Immediate versions don't provide a code. */
break;
case 1:
- preempt_disable();
-
- own_fpu();
- if (used_math()) { /* Using the FPU again. */
- restore_fp(current);
- } else { /* First time FPU user. */
+ if (used_math()) /* Using the FPU again. */
+ own_fpu(1);
+ else { /* First time FPU user. */
init_fpu();
set_used_math();
}
- if (cpu_has_fpu) {
- preempt_enable();
- } else {
+ if (!raw_cpu_has_fpu) {
int sig;
- preempt_enable();
sig = fpu_emulator_cop1Handler(regs,
&current->thread.fpu, 0);
if (sig)
case 2:
case 3:
- die_if_kernel("do_cpu invoked from kernel context!", regs);
break;
}
/*
* This is used by native signal handling
*/
-asmlinkage int (*save_fp_context)(struct sigcontext *sc);
-asmlinkage int (*restore_fp_context)(struct sigcontext *sc);
+asmlinkage int (*save_fp_context)(struct sigcontext __user *sc);
+asmlinkage int (*restore_fp_context)(struct sigcontext __user *sc);
-extern asmlinkage int _save_fp_context(struct sigcontext *sc);
-extern asmlinkage int _restore_fp_context(struct sigcontext *sc);
+extern asmlinkage int _save_fp_context(struct sigcontext __user *sc);
+extern asmlinkage int _restore_fp_context(struct sigcontext __user *sc);
-extern asmlinkage int fpu_emulator_save_context(struct sigcontext *sc);
-extern asmlinkage int fpu_emulator_restore_context(struct sigcontext *sc);
+extern asmlinkage int fpu_emulator_save_context(struct sigcontext __user *sc);
+extern asmlinkage int fpu_emulator_restore_context(struct sigcontext __user *sc);
#ifdef CONFIG_SMP
-static int smp_save_fp_context(struct sigcontext *sc)
+static int smp_save_fp_context(struct sigcontext __user *sc)
{
- return cpu_has_fpu
+ return raw_cpu_has_fpu
? _save_fp_context(sc)
: fpu_emulator_save_context(sc);
}
-static int smp_restore_fp_context(struct sigcontext *sc)
+static int smp_restore_fp_context(struct sigcontext __user *sc)
{
- return cpu_has_fpu
+ return raw_cpu_has_fpu
? _restore_fp_context(sc)
: fpu_emulator_restore_context(sc);
}
/*
* This is used by 32-bit signal stuff on the 64-bit kernel
*/
-asmlinkage int (*save_fp_context32)(struct sigcontext32 *sc);
-asmlinkage int (*restore_fp_context32)(struct sigcontext32 *sc);
+asmlinkage int (*save_fp_context32)(struct sigcontext32 __user *sc);
+asmlinkage int (*restore_fp_context32)(struct sigcontext32 __user *sc);
-extern asmlinkage int _save_fp_context32(struct sigcontext32 *sc);
-extern asmlinkage int _restore_fp_context32(struct sigcontext32 *sc);
+extern asmlinkage int _save_fp_context32(struct sigcontext32 __user *sc);
+extern asmlinkage int _restore_fp_context32(struct sigcontext32 __user *sc);
-extern asmlinkage int fpu_emulator_save_context32(struct sigcontext32 *sc);
-extern asmlinkage int fpu_emulator_restore_context32(struct sigcontext32 *sc);
+extern asmlinkage int fpu_emulator_save_context32(struct sigcontext32 __user *sc);
+extern asmlinkage int fpu_emulator_restore_context32(struct sigcontext32 __user *sc);
static inline void signal32_init(void)
{
* with appropriate macros from uaccess.h
*/
-int fpu_emulator_save_context(struct sigcontext *sc)
+int fpu_emulator_save_context(struct sigcontext __user *sc)
{
int i;
int err = 0;
return err;
}
-int fpu_emulator_restore_context(struct sigcontext *sc)
+int fpu_emulator_restore_context(struct sigcontext __user *sc)
{
int i;
int err = 0;
* This is the o32 version
*/
-int fpu_emulator_save_context32(struct sigcontext32 *sc)
+int fpu_emulator_save_context32(struct sigcontext32 __user *sc)
{
int i;
int err = 0;
return err;
}
-int fpu_emulator_restore_context32(struct sigcontext32 *sc)
+int fpu_emulator_restore_context32(struct sigcontext32 __user *sc)
{
int i;
int err = 0;
char parity = '\0', bits = '\0', flow = '\0';
char *s;
- if ((strstr(prom_getcmdline(), "console=ttyS")) == NULL) {
+ if ((strstr(prom_getcmdline(), "console=")) == NULL) {
s = prom_getenv("modetty0");
if (s) {
while (*s >= '0' && *s <= '9')
{
}
-static void local_r3k_flush_data_cache_page(unsigned long addr)
+static void local_r3k_flush_data_cache_page(void *addr)
{
}
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 1994 - 2003 by Ralf Baechle
+ * Copyright (C) 1994 - 2003, 07 by Ralf Baechle (ralf@linux-mips.org)
+ * Copyright (C) 2007 MIPS Technologies, Inc.
*/
#include <linux/init.h>
#include <linux/kernel.h>
EXPORT_SYMBOL(__flush_dcache_page);
+void __flush_anon_page(struct page *page, unsigned long vmaddr)
+{
+ if (pages_do_alias((unsigned long)page_address(page), vmaddr)) {
+ void *kaddr;
+
+ kaddr = kmap_coherent(page, vmaddr);
+ flush_data_cache_page((unsigned long)kaddr);
+ kunmap_coherent(kaddr);
+ }
+}
+
+EXPORT_SYMBOL(__flush_anon_page);
+
void __update_cache(struct vm_area_struct *vma, unsigned long address,
pte_t pte)
{
asmlinkage void sb1_cache_error(void)
{
- uint64_t cerr_dpa;
uint32_t errctl, cerr_i, cerr_d, dpalo, dpahi, eepc, res;
+ unsigned long long cerr_dpa;
#ifdef CONFIG_SIBYTE_BW_TRACE
/* Freeze the trace buffer now */
{
unsigned short way;
int valid;
- uint64_t taglo, va, tlo_tmp;
uint32_t taghi, taglolo, taglohi;
+ unsigned long long taglo, va;
+ uint64_t tlo_tmp;
uint8_t lru;
int res = 0;
{
int valid, way;
unsigned char state;
- uint64_t taglo, pa;
uint32_t taghi, taglolo, taglohi;
+ unsigned long long taglo, pa;
uint8_t ecc, lru;
int res = 0;
}
if (data) {
- uint64_t datalo;
uint32_t datalohi, datalolo, datahi;
+ unsigned long long datalo;
int offset;
char bad_ecc = 0;
#include <dma-coherence.h>
+static inline unsigned long dma_addr_to_virt(dma_addr_t dma_addr)
+{
+ unsigned long addr = plat_dma_addr_to_phys(dma_addr);
+
+ return (unsigned long)phys_to_virt(addr);
+}
+
/*
* Warning on the terminology - Linux calls an uncached area coherent;
* MIPS terminology calls memory areas with hardware maintained coherency
enum dma_data_direction direction)
{
if (cpu_is_noncoherent_r10000(dev))
- __dma_sync(plat_dma_addr_to_phys(dma_addr) + PAGE_OFFSET, size,
+ __dma_sync(dma_addr_to_virt(dma_addr), size,
direction);
plat_unmap_dma_mem(dma_addr);
if (cpu_is_noncoherent_r10000(dev)) {
unsigned long addr;
- addr = PAGE_OFFSET + plat_dma_addr_to_phys(dma_handle);
+ addr = dma_addr_to_virt(dma_handle);
__dma_sync(addr, size, direction);
}
}
if (!plat_device_is_coherent(dev)) {
unsigned long addr;
- addr = PAGE_OFFSET + plat_dma_addr_to_phys(dma_handle);
+ addr = dma_addr_to_virt(dma_handle);
__dma_sync(addr, size, direction);
}
}
if (cpu_is_noncoherent_r10000(dev)) {
unsigned long addr;
- addr = PAGE_OFFSET + plat_dma_addr_to_phys(dma_handle);
+ addr = dma_addr_to_virt(dma_handle);
__dma_sync(addr + offset, size, direction);
}
}
if (!plat_device_is_coherent(dev)) {
unsigned long addr;
- addr = PAGE_OFFSET + plat_dma_addr_to_phys(dma_handle);
+ addr = dma_addr_to_virt(dma_handle);
__dma_sync(addr + offset, size, direction);
}
}
siginfo_t info;
#if 0
- printk("Cpu%d[%s:%d:%0*lx:%ld:%0*lx]\n", smp_processor_id(),
+ printk("Cpu%d[%s:%d:%0*lx:%ld:%0*lx]\n", raw_smp_processor_id(),
current->comm, current->pid, field, address, write,
field, regs->cp0_epc);
#endif
printk(KERN_ALERT "CPU %d Unable to handle kernel paging request at "
"virtual address %0*lx, epc == %0*lx, ra == %0*lx\n",
- smp_processor_id(), field, address, field, regs->cp0_epc,
+ raw_smp_processor_id(), field, address, field, regs->cp0_epc,
field, regs->regs[31]);
die("Oops", regs);
pmd_t *pmd, *pmd_k;
pte_t *pte_k;
- pgd = (pgd_t *) pgd_current[smp_processor_id()] + offset;
+ pgd = (pgd_t *) pgd_current[raw_smp_processor_id()] + offset;
pgd_k = init_mm.pgd + offset;
if (!pgd_present(*pgd_k))
static inline void kmap_coherent_init(void) {}
#endif
-static inline void *kmap_coherent(struct page *page, unsigned long addr)
+void *kmap_coherent(struct page *page, unsigned long addr)
{
enum fixed_addresses idx;
unsigned long vaddr, flags, entrylo;
#define UNIQUE_ENTRYHI(idx) (CKSEG0 + ((idx) << (PAGE_SHIFT + 1)))
-static inline void kunmap_coherent(struct page *page)
+void kunmap_coherent(struct page *page)
{
#ifndef CONFIG_MIPS_MT_SMTC
unsigned int wired;
#ifdef CONFIG_FLATMEM
free_area_init(zones_size);
#else
- pfn = 0;
+ pfn = min_low_pfn;
for (i = 0; i < MAX_NR_ZONES; i++)
for (j = 0; j < zones_size[i]; j++, pfn++)
if (!page_is_ram(pfn))
for (i = 0; i < DM_NUM_CHANNELS; i++) {
const u64 base_val = CPHYSADDR(&page_descr[i]) |
V_DM_DSCR_BASE_RINGSZ(1);
- volatile void *base_reg =
- IOADDR(A_DM_REGISTER(i, R_DM_DSCR_BASE));
+ void *base_reg = IOADDR(A_DM_REGISTER(i, R_DM_DSCR_BASE));
__raw_writeq(base_val, base_reg);
__raw_writeq(base_val | M_DM_DSCR_BASE_RESET, base_reg);
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_cause() & read_c0_status();
+ unsigned int pending = read_c0_cause() & read_c0_status() & ST0_IM;
if (pending & STATUSF_IP0)
do_IRQ(0);
#define vpe_id() smp_processor_id()
#else
#define WHAT 0
-#define vpe_id() smp_processor_id()
+#define vpe_id() 0
#endif
#define __define_perf_accessors(r, n, np) \
__define_perf_accessors(perfcntr, 0, 2)
__define_perf_accessors(perfcntr, 1, 3)
-__define_perf_accessors(perfcntr, 2, 2)
-__define_perf_accessors(perfcntr, 3, 2)
+__define_perf_accessors(perfcntr, 2, 0)
+__define_perf_accessors(perfcntr, 3, 1)
__define_perf_accessors(perfctrl, 0, 2)
__define_perf_accessors(perfctrl, 1, 3)
-__define_perf_accessors(perfctrl, 2, 2)
-__define_perf_accessors(perfctrl, 3, 2)
+__define_perf_accessors(perfctrl, 2, 0)
+__define_perf_accessors(perfctrl, 3, 1)
struct op_mips_model op_model_mipsxx_ops;
int i;
/* Compute the performance counter control word. */
- /* For now count kernel and user mode */
for (i = 0; i < counters; i++) {
reg.control[i] = 0;
reg.counter[i] = 0;
counters = __n_counters();
}
-#ifdef CONFIG_MIPS_MT_SMP
- counters >> 1;
-#endif
return counters;
}
reset_counters(counters);
+#ifdef CONFIG_MIPS_MT_SMP
+ counters >>= 1;
+#endif
+
op_model_mipsxx_ops.num_counters = counters;
switch (current_cpu_data.cputype) {
case CPU_20KC:
static void mipsxx_exit(void)
{
- reset_counters(op_model_mipsxx_ops.num_counters);
+ int counters = op_model_mipsxx_ops.num_counters;
+#ifdef CONFIG_MIPS_MT_SMP
+ counters <<= 1;
+#endif
+ reset_counters(counters);
perf_irq = null_perf_irq;
}
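With CONFIG_MIPS_MT_SMP the physical performance counters are shared between
two VPEs, so the setup path halves the count it reports per VPE while still
resetting every physical counter, and the exit path doubles it back before the
final reset. A worked example of that pairing, assuming a core that exposes
four counters (illustrative values only):

	counters = __n_counters();		/* 4 physical counters     */
	reset_counters(counters);		/* clear all four          */
	counters >>= 1;				/* each VPE may use 2      */
	op_model_mipsxx_ops.num_counters = counters;

	/* ... and on exit: */
	counters = op_model_mipsxx_ops.num_counters;	/* 2               */
	counters <<= 1;					/* back to 4       */
	reset_counters(counters);		/* clear all physical ones */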
/*
* See if the PCI bus has been configured by the firmware.
*/
- reg = *((volatile uint64_t *) IOADDR(A_SCD_SYSTEM_CFG));
+ reg = __raw_readq(IOADDR(A_SCD_SYSTEM_CFG));
if (!(reg & M_BCM1480_SYS_PCI_HOST)) {
bcm1480_bus_status |= PCI_DEVICE_MODE;
} else {
#include <linux/pci.h>
+#include <asm/irq.h>
int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
{
/*
* See if the PCI bus has been configured by the firmware.
*/
- reg = *((volatile uint64_t *) IOADDR(A_SCD_SYSTEM_CFG));
+ reg = __raw_readq(IOADDR(A_SCD_SYSTEM_CFG));
if (!(reg & M_SYS_PCI_HOST)) {
sb1250_bus_status |= PCI_DEVICE_MODE;
} else {
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_status() & read_c0_cause();
+ unsigned int pending = read_c0_status() & read_c0_cause() & ST0_IM;
if (pending & STATUSF_IP2)
hw0_irqdispatch(2);
else if (pending & STATUSF_IP7) {
if (read_c0_config7() & 0x01c0)
timer_irqdispatch(7);
- }
-
- spurious_interrupt();
+ } else
+ spurious_interrupt();
}
static inline void modify_cp0_intmask(unsigned clr_mask, unsigned set_mask)
* Note, PCI INTA is active low on the bus, but inverted
* in the GIC, so to us it's active high.
*/
-#ifdef CONFIG_PNX8550_V2PCI
- if (gic_int_line == (PNX8550_INT_GPIO0 - PNX8550_INT_GIC_MIN)) {
- /* PCI INT through gpio 8, which is setup in
- * pnx8550_setup.c and routed to GPIO
- * Interrupt Level 0 (GPIO Connection 58).
- * Set it active low. */
-
- PNX8550_GIC_REQ(gic_int_line) = 0x1E020000;
- } else
-#endif
- {
- PNX8550_GIC_REQ(i - PNX8550_INT_GIC_MIN) = 0x1E000000;
- }
+ PNX8550_GIC_REQ(i - PNX8550_INT_GIC_MIN) = 0x1E000000;
/* mask/priority is still 0 so we will not get any
* interrupts until it is unmasked */
void prom_boot_secondary(int cpu, struct task_struct *idle)
{
}
+
+void __init plat_smp_setup(void)
+{
+}
+void __init plat_prepare_cpus(unsigned int max_cpus)
+{
+}
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_cause();
+ unsigned int pending = read_c0_status() & read_c0_cause();
/*
* First we check for r4k counter/timer IRQ.
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_cause();
+ unsigned int pending = read_c0_status() & read_c0_cause();
if (likely(pending & IE_IRQ0))
ip32_irq0();
unsigned long flags;
unsigned int irq_dirty;
- i = first_cpu(mask);
- if (next_cpu(i, mask) <= NR_CPUS) {
+ if (cpus_weight(mask) != 1) {
printk("attempted to set irq affinity for irq %d to multiple CPUs\n", irq);
return;
}
+ i = first_cpu(mask);
/* Convert logical CPU to physical CPU */
cpu = cpu_logical_map(i);
* independent of board/firmware
*/
-static volatile void *mailbox_0_set_regs[] = {
+static void *mailbox_0_set_regs[] = {
IOADDR(A_BCM1480_IMR_CPU0_BASE + R_BCM1480_IMR_MAILBOX_0_SET_CPU),
IOADDR(A_BCM1480_IMR_CPU1_BASE + R_BCM1480_IMR_MAILBOX_0_SET_CPU),
IOADDR(A_BCM1480_IMR_CPU2_BASE + R_BCM1480_IMR_MAILBOX_0_SET_CPU),
IOADDR(A_BCM1480_IMR_CPU3_BASE + R_BCM1480_IMR_MAILBOX_0_SET_CPU),
};
-static volatile void *mailbox_0_clear_regs[] = {
+static void *mailbox_0_clear_regs[] = {
IOADDR(A_BCM1480_IMR_CPU0_BASE + R_BCM1480_IMR_MAILBOX_0_CLR_CPU),
IOADDR(A_BCM1480_IMR_CPU1_BASE + R_BCM1480_IMR_MAILBOX_0_CLR_CPU),
IOADDR(A_BCM1480_IMR_CPU2_BASE + R_BCM1480_IMR_MAILBOX_0_CLR_CPU),
IOADDR(A_BCM1480_IMR_CPU3_BASE + R_BCM1480_IMR_MAILBOX_0_CLR_CPU),
};
-static volatile void *mailbox_0_regs[] = {
+static void *mailbox_0_regs[] = {
IOADDR(A_BCM1480_IMR_CPU0_BASE + R_BCM1480_IMR_MAILBOX_0_CPU),
IOADDR(A_BCM1480_IMR_CPU1_BASE + R_BCM1480_IMR_MAILBOX_0_CPU),
IOADDR(A_BCM1480_IMR_CPU2_BASE + R_BCM1480_IMR_MAILBOX_0_CPU),
* blasting the high 32 bits.
*/
- pending = read_c0_cause() & read_c0_status();
+ pending = read_c0_cause() & read_c0_status() & ST0_IM;
#ifdef CONFIG_SIBYTE_SB1250_PROF
if (pending & CAUSEF_IP7) /* Cpu performance counter interrupt */
periph_rev = 3;
pass_str = "A2";
break;
+ case K_SYS_REVISION_BCM112x_A3:
+ periph_rev = 3;
+ pass_str = "A3";
+ break;
+ case K_SYS_REVISION_BCM112x_A4:
+ periph_rev = 3;
+ pass_str = "A4";
+ break;
+ case K_SYS_REVISION_BCM112x_B0:
+ periph_rev = 3;
+ pass_str = "B0";
+ break;
default:
printk("Unknown %s rev %x\n", soc_str, soc_pass);
ret = 1;
#define LEDS_PHYS MLEDS_PHYS
#endif
-#define setled(index, c) \
- ((unsigned char *)(IOADDR(LEDS_PHYS)+0x20))[(3-(index))<<3] = (c)
void setleds(char *str)
{
+ void *reg;
int i;
+
for (i = 0; i < 4; i++) {
- if (!str[i]) {
- setled(i, ' ');
- } else {
- setled(i, str[i]);
- }
+ reg = IOADDR(LEDS_PHYS) + 0x20 + ((3 - i) << 3);
+
+ if (!str[i])
+ writeb(' ', reg);
+ else
+ writeb(str[i], reg);
}
}
-#endif
+
+#endif /* LEDS_PHYS */
static void sni_pcimt_hwint(void)
{
- u32 pending = (read_c0_cause() & read_c0_status());
+ u32 pending = read_c0_cause() & read_c0_status();
if (pending & C_IRQ5)
do_IRQ (MIPS_CPU_IRQ_BASE + 7);
static void sni_pcit_hwint(void)
{
- u32 pending = (read_c0_cause() & read_c0_status());
+ u32 pending = read_c0_cause() & read_c0_status();
if (pending & C_IRQ1)
pcit_hwint1();
static void sni_pcit_hwint_cplus(void)
{
- u32 pending = (read_c0_cause() & read_c0_status());
+ u32 pending = read_c0_cause() & read_c0_status();
if (pending & C_IRQ0)
pcit_hwint0();
asmlinkage void plat_irq_dispatch(void)
{
- unsigned int pending = read_c0_status() & read_c0_cause();
+ unsigned int pending = read_c0_status() & read_c0_cause() & ST0_IM;
if (pending & STATUSF_IP7) /* cpu timer */
do_IRQ(TX4927_IRQ_CPU_TIMER);
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.20-rc5
-# Mon Jan 22 22:12:56 2007
+# Linux kernel version: 2.6.21-rc3
+# Fri Mar 9 23:34:53 2007
#
CONFIG_PPC64=y
CONFIG_64BIT=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
# CONFIG_IPC_NS is not set
+CONFIG_SYSVIPC_SYSCTL=y
# CONFIG_POSIX_MQUEUE is not set
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_TASKSTATS is not set
CONFIG_CPUSETS=y
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_RELAY is not set
+CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
# CONFIG_PPC_PSERIES is not set
# CONFIG_PPC_ISERIES is not set
# CONFIG_PPC_MPC52xx is not set
+# CONFIG_PPC_MPC5200 is not set
# CONFIG_PPC_PMAC is not set
# CONFIG_PPC_MAPLE is not set
# CONFIG_PPC_PASEMI is not set
CONFIG_PPC_CELL_NATIVE=y
CONFIG_PPC_IBM_CELL_BLADE=y
CONFIG_PPC_PS3=y
+CONFIG_PPC_CELLEB=y
CONFIG_PPC_NATIVE=y
CONFIG_UDBG_RTAS_CONSOLE=y
+CONFIG_PPC_UDBG_BEAT=y
# CONFIG_U3_DART is not set
CONFIG_PPC_RTAS=y
# CONFIG_RTAS_ERROR_LOGGING is not set
#
# PS3 Platform Options
#
+# CONFIG_PS3_ADVANCED is not set
CONFIG_PS3_HTAB_SIZE=20
# CONFIG_PS3_DYNAMIC_DMA is not set
CONFIG_PS3_USE_LPAR_ADDR=y
CONFIG_PS3_VUART=y
+CONFIG_PS3_PS3AV=y
+CONFIG_PS3_SYS_MANAGER=y
#
# Kernel options
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_MIGRATION=y
CONFIG_RESOURCES_64BIT=y
+CONFIG_ZONE_DMA_FLAG=1
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_NODES_SPAN_OTHER_NODES=y
CONFIG_PPC_64K_PAGES=y
#
# Bus options
#
+CONFIG_ZONE_DMA=y
CONFIG_GENERIC_ISA_DMA=y
# CONFIG_MPIC_WEIRD is not set
# CONFIG_PPC_I8259 is not set
CONFIG_XFRM=y
# CONFIG_XFRM_USER is not set
# CONFIG_XFRM_SUB_POLICY is not set
+# CONFIG_XFRM_MIGRATE is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_LOG=m
CONFIG_IP_NF_TARGET_ULOG=m
-CONFIG_IP_NF_TARGET_TCPMSS=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_TOS=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_DEBUG_DEVRES is not set
# CONFIG_SYS_HYPERVISOR is not set
#
#
# Plug and Play support
#
+# CONFIG_PNPACPI is not set
#
# Block devices
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=131072
CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024
-CONFIG_BLK_DEV_INITRD=y
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
# CONFIG_BLK_DEV_JMICRON is not set
# CONFIG_BLK_DEV_SC1200 is not set
# CONFIG_BLK_DEV_PIIX is not set
+# CONFIG_BLK_DEV_IT8213 is not set
# CONFIG_BLK_DEV_IT821X is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
+# CONFIG_BLK_DEV_TC86C001 is not set
+CONFIG_BLK_DEV_IDE_CELLEB=y
# CONFIG_IDE_ARM is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_IVB is not set
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
-CONFIG_SCSI=m
+CONFIG_SCSI=y
# CONFIG_SCSI_TGT is not set
# CONFIG_SCSI_NETLINK is not set
CONFIG_SCSI_PROC_FS=y
#
# SCSI support type (disk, tape, CD-ROM)
#
-CONFIG_BLK_DEV_SD=m
+CONFIG_BLK_DEV_SD=y
# CONFIG_CHR_DEV_ST is not set
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=m
# CONFIG_BLK_DEV_SR_VENDOR is not set
-CONFIG_CHR_DEV_SG=m
+CONFIG_CHR_DEV_SG=y
# CONFIG_CHR_DEV_SCH is not set
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
# CONFIG_SCSI_ISCSI_ATTRS is not set
-# CONFIG_SCSI_SAS_ATTRS is not set
+CONFIG_SCSI_SAS_ATTRS=y
# CONFIG_SCSI_SAS_LIBSAS is not set
#
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
+# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_FC is not set
# CONFIG_SCSI_QLA_ISCSI is not set
#
# Serial ATA (prod) and Parallel ATA (experimental) drivers
#
-# CONFIG_ATA is not set
+CONFIG_ATA=y
+# CONFIG_ATA_NONSTANDARD is not set
+# CONFIG_SATA_AHCI is not set
+# CONFIG_SATA_SVW is not set
+# CONFIG_ATA_PIIX is not set
+# CONFIG_SATA_MV is not set
+# CONFIG_SATA_NV is not set
+# CONFIG_PDC_ADMA is not set
+# CONFIG_SATA_QSTOR is not set
+CONFIG_SATA_PROMISE=y
+# CONFIG_SATA_SX4 is not set
+# CONFIG_SATA_SIL is not set
+# CONFIG_SATA_SIL24 is not set
+# CONFIG_SATA_SIS is not set
+# CONFIG_SATA_ULI is not set
+# CONFIG_SATA_VIA is not set
+# CONFIG_SATA_VITESSE is not set
+# CONFIG_SATA_INIC162X is not set
+# CONFIG_PATA_ALI is not set
+# CONFIG_PATA_AMD is not set
+# CONFIG_PATA_ARTOP is not set
+# CONFIG_PATA_ATIIXP is not set
+# CONFIG_PATA_CMD64X is not set
+# CONFIG_PATA_CS5520 is not set
+# CONFIG_PATA_CS5530 is not set
+# CONFIG_PATA_CYPRESS is not set
+# CONFIG_PATA_EFAR is not set
+# CONFIG_ATA_GENERIC is not set
+# CONFIG_PATA_HPT366 is not set
+# CONFIG_PATA_HPT37X is not set
+# CONFIG_PATA_HPT3X2N is not set
+# CONFIG_PATA_HPT3X3 is not set
+# CONFIG_PATA_IT821X is not set
+# CONFIG_PATA_IT8213 is not set
+# CONFIG_PATA_JMICRON is not set
+# CONFIG_PATA_TRIFLEX is not set
+# CONFIG_PATA_MARVELL is not set
+# CONFIG_PATA_MPIIX is not set
+# CONFIG_PATA_OLDPIIX is not set
+# CONFIG_PATA_NETCELL is not set
+# CONFIG_PATA_NS87410 is not set
+# CONFIG_PATA_OPTI is not set
+# CONFIG_PATA_OPTIDMA is not set
+# CONFIG_PATA_PDC_OLD is not set
+# CONFIG_PATA_RADISYS is not set
+# CONFIG_PATA_RZ1000 is not set
+# CONFIG_PATA_SC1200 is not set
+# CONFIG_PATA_SERVERWORKS is not set
+CONFIG_PATA_PDC2027X=m
+# CONFIG_PATA_SIL680 is not set
+# CONFIG_PATA_SIS is not set
+# CONFIG_PATA_VIA is not set
+# CONFIG_PATA_WINBOND is not set
+# CONFIG_PATA_SCC is not set
#
# Multi-device support (RAID and LVM)
#
# Fusion MPT device support
#
-# CONFIG_FUSION is not set
+CONFIG_FUSION=y
# CONFIG_FUSION_SPI is not set
# CONFIG_FUSION_FC is not set
-# CONFIG_FUSION_SAS is not set
+CONFIG_FUSION_SAS=y
+CONFIG_FUSION_MAX_SGE=128
+# CONFIG_FUSION_CTL is not set
#
# IEEE 1394 (FireWire) support
# CONFIG_BNX2 is not set
CONFIG_SPIDER_NET=y
# CONFIG_QLA3XXX is not set
+# CONFIG_ATL1 is not set
#
# Ethernet (10000 Mbit)
#
# CONFIG_CHELSIO_T1 is not set
+# CONFIG_CHELSIO_T3 is not set
# CONFIG_IXGB is not set
# CONFIG_S2IO is not set
# CONFIG_MYRI10GE is not set
# CONFIG_NETXEN_NIC is not set
+# CONFIG_PASEMI_MAC is not set
#
# Token Ring devices
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_SERIAL_TXX9=y
+CONFIG_HAS_TXX9_SERIAL=y
+CONFIG_SERIAL_TXX9_NR_UARTS=2
+CONFIG_SERIAL_TXX9_CONSOLE=y
# CONFIG_SERIAL_JSM is not set
+CONFIG_SERIAL_OF_PLATFORM=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_RTAS=y
+# CONFIG_HVC_BEAT is not set
#
# IPMI
#
-# CONFIG_IPMI_HANDLER is not set
+CONFIG_IPMI_HANDLER=m
+# CONFIG_IPMI_PANIC_EVENT is not set
+CONFIG_IPMI_DEVICE_INTERFACE=m
+CONFIG_IPMI_SI=m
+CONFIG_IPMI_WATCHDOG=m
+CONFIG_IPMI_POWEROFF=m
#
# Watchdog Cards
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
-CONFIG_WATCHDOG_RTAS=y
+# CONFIG_WATCHDOG_RTAS is not set
#
# PCI-based Watchdog Cards
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PASEMI is not set
# CONFIG_I2C_PROSAVAGE is not set
# CONFIG_I2C_SAVAGE4 is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_HWMON is not set
# CONFIG_HWMON_VID is not set
+#
+# Multifunction device drivers
+#
+# CONFIG_MFD_SM501 is not set
+
#
# Multimedia devices
#
#
# Graphics support
#
-CONFIG_FIRMWARE_EDID=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
# CONFIG_FB is not set
# CONFIG_FB_IBM_GXT4500 is not set
#
# CONFIG_VGA_CONSOLE is not set
CONFIG_DUMMY_CONSOLE=y
-# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
# HID Devices
#
CONFIG_HID=m
+# CONFIG_HID_DEBUG is not set
#
# USB support
# Miscellaneous USB options
#
CONFIG_USB_DEVICEFS=y
-# CONFIG_USB_BANDWIDTH is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
-# CONFIG_USB_MULTITHREAD_PROBE is not set
# CONFIG_USB_OTG is not set
#
# CONFIG_USB_EHCI_SPLIT_ISO is not set
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
+CONFIG_USB_EHCI_BIG_ENDIAN_MMIO=y
# CONFIG_USB_ISP116X_HCD is not set
CONFIG_USB_OHCI_HCD=m
-# CONFIG_USB_OHCI_BIG_ENDIAN is not set
+CONFIG_USB_OHCI_HCD_PPC_OF=y
+CONFIG_USB_OHCI_HCD_PPC_OF_BE=y
+# CONFIG_USB_OHCI_HCD_PPC_OF_LE is not set
+CONFIG_USB_OHCI_HCD_PCI=y
+CONFIG_USB_OHCI_BIG_ENDIAN_DESC=y
+CONFIG_USB_OHCI_BIG_ENDIAN_MMIO=y
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
# CONFIG_USB_UHCI_HCD is not set
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_ATI_REMOTE2 is not set
# CONFIG_USB_KEYSPAN_REMOTE is not set
# CONFIG_USB_APPLETOUCH is not set
+# CONFIG_USB_GTCO is not set
#
# USB Imaging devices
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
+# CONFIG_USB_BERRY_CHARGE is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
+# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
#
CONFIG_INFINIBAND_MTHCA_DEBUG=y
# CONFIG_INFINIBAND_AMSO1100 is not set
CONFIG_INFINIBAND_IPOIB=m
+# CONFIG_INFINIBAND_IPOIB_CM is not set
CONFIG_INFINIBAND_IPOIB_DEBUG=y
CONFIG_INFINIBAND_IPOIB_DEBUG_DATA=y
# CONFIG_INFINIBAND_SRP is not set
# DMA Devices
#
+#
+# Auxiliary Display support
+#
+
#
# Virtualization
#
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_PLIST=y
-CONFIG_IOMAP_COPY=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT=y
#
# Instrumentation Support
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SHIRQ is not set
CONFIG_LOG_BUF_SHIFT=15
-CONFIG_DETECT_SOFTLOCKUP=y
+# CONFIG_DETECT_SOFTLOCKUP is not set
# CONFIG_SCHEDSTATS is not set
+# CONFIG_TIMER_STATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
CONFIG_DEBUG_MUTEXES=y
-# CONFIG_DEBUG_RWSEMS is not set
CONFIG_DEBUG_SPINLOCK_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_LIST is not set
# CONFIG_FORCED_INLINING is not set
# CONFIG_RCU_TORTURE_TEST is not set
+# CONFIG_FAULT_INJECTION is not set
# CONFIG_DEBUG_STACKOVERFLOW is not set
# CONFIG_DEBUG_STACK_USAGE is not set
CONFIG_DEBUGGER=y
# CONFIG_CRYPTO_GF128MUL is not set
CONFIG_CRYPTO_ECB=m
CONFIG_CRYPTO_CBC=m
+CONFIG_CRYPTO_PCBC=m
# CONFIG_CRYPTO_LRW is not set
CONFIG_CRYPTO_DES=m
+# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_SERPENT is not set
CONFIG_CRYPTO_DEFLATE=m
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_CRC32C is not set
+# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_TEST is not set
#
#ifdef CONFIG_PPC64
struct thread_info *t = current_thread_info();
- if (t->flags & _TIF_ABI_PENDING)
- t->flags ^= (_TIF_ABI_PENDING | _TIF_32BIT);
+ if (test_ti_thread_flag(t, TIF_ABI_PENDING)) {
+ clear_ti_thread_flag(t, TIF_ABI_PENDING);
+ if (test_ti_thread_flag(t, TIF_32BIT))
+ clear_ti_thread_flag(t, TIF_32BIT);
+ else
+ set_ti_thread_flag(t, TIF_32BIT);
+ }
#endif
discard_lazy_cpu_state();
void udbg_init_pas_realmode(void)
{
- udbg_comport = (volatile struct NS16550 __iomem *)0xfcff03f8;
+ udbg_comport = (volatile struct NS16550 __iomem *)0xfcff03f8UL;
udbg_putc = udbg_pas_real_putc;
udbg_getc = NULL;
"non-cacheable mapping\n");
psize = mmu_vmalloc_psize = MMU_PAGE_4K;
}
+#ifdef CONFIG_SPE_BASE
+ spu_flush_all_slbs(mm);
+#endif
}
if (user_region) {
if (psize != get_paca()->context.user_psize) {
mmu_psize_defs[MMU_PAGE_4K].sllp;
get_paca()->context = mm->context;
slb_flush_and_rebolt();
+#ifdef CONFIG_SPE_BASE
+ spu_flush_all_slbs(mm);
+#endif
}
}
if (mm->context.user_psize == MMU_PAGE_64K)
#include <asm/machdep.h>
#include <asm/cputable.h>
#include <asm/tlb.h>
+#include <asm/spu.h>
#include <linux/sysctl.h>
if ((addr + len) > 0x100000000UL)
err = open_high_hpage_areas(current->mm,
HTLB_AREA_MASK(addr, len));
+#ifdef CONFIG_SPE_BASE
+ spu_flush_all_slbs(current->mm);
+#endif
if (err) {
printk(KERN_DEBUG "prepare_hugepage_range(%lx, %lx)"
" failed (lowmask: 0x%04hx, highmask: 0x%04hx)\n",
pr_debug("%s: irq=%x. l2=%d\n", __func__, irq, l2irq);
- io_be_setbit(&intr->main_mask, 15 - l2irq);
+ io_be_setbit(&intr->main_mask, 16 - l2irq);
}
static void mpc52xx_main_unmask(unsigned int virq)
pr_debug("%s: irq=%x. l2=%d\n", __func__, irq, l2irq);
- io_be_clrbit(&intr->main_mask, 15 - l2irq);
+ io_be_clrbit(&intr->main_mask, 16 - l2irq);
}
static struct irq_chip mpc52xx_main_irqchip = {
const struct spu_management_ops *spu_management_ops;
const struct spu_priv1_ops *spu_priv1_ops;
+static struct list_head spu_list[MAX_NUMNODES];
+static LIST_HEAD(spu_full_list);
+static DEFINE_MUTEX(spu_mutex);
+static spinlock_t spu_list_lock = SPIN_LOCK_UNLOCKED;
+
EXPORT_SYMBOL_GPL(spu_priv1_ops);
+void spu_invalidate_slbs(struct spu *spu)
+{
+ struct spu_priv2 __iomem *priv2 = spu->priv2;
+
+ if (spu_mfc_sr1_get(spu) & MFC_STATE1_RELOCATE_MASK)
+ out_be64(&priv2->slb_invalidate_all_W, 0UL);
+}
+EXPORT_SYMBOL_GPL(spu_invalidate_slbs);
+
+/* This is called by the MM core when a segment size is changed, to
+ * request a flush of all the SPEs using a given mm
+ */
+void spu_flush_all_slbs(struct mm_struct *mm)
+{
+ struct spu *spu;
+ unsigned long flags;
+
+ spin_lock_irqsave(&spu_list_lock, flags);
+ list_for_each_entry(spu, &spu_full_list, full_list) {
+ if (spu->mm == mm)
+ spu_invalidate_slbs(spu);
+ }
+ spin_unlock_irqrestore(&spu_list_lock, flags);
+}
+
+/* The hack below stinks... try to do something better one of
+ * these days... Does it even work properly with NR_CPUS == 1 ?
+ */
+static inline void mm_needs_global_tlbie(struct mm_struct *mm)
+{
+ int nr = (NR_CPUS > 1) ? NR_CPUS : NR_CPUS + 1;
+
+ /* Global TLBIE broadcast required with SPEs. */
+ __cpus_setall(&mm->cpu_vm_mask, nr);
+}
+
+void spu_associate_mm(struct spu *spu, struct mm_struct *mm)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&spu_list_lock, flags);
+ spu->mm = mm;
+ spin_unlock_irqrestore(&spu_list_lock, flags);
+ if (mm)
+ mm_needs_global_tlbie(mm);
+}
+EXPORT_SYMBOL_GPL(spu_associate_mm);
+
static int __spu_trap_invalid_dma(struct spu *spu)
{
pr_debug("%s\n", __FUNCTION__);
struct spu_priv2 __iomem *priv2 = spu->priv2;
struct mm_struct *mm = spu->mm;
u64 esid, vsid, llp;
+ int psize;
pr_debug("%s\n", __FUNCTION__);
case USER_REGION_ID:
#ifdef CONFIG_HUGETLB_PAGE
if (in_hugepage_area(mm->context, ea))
- llp = mmu_psize_defs[mmu_huge_psize].sllp;
+ psize = mmu_huge_psize;
else
#endif
- llp = mmu_psize_defs[mmu_virtual_psize].sllp;
+ psize = mm->context.user_psize;
vsid = (get_vsid(mm->context.id, ea) << SLB_VSID_SHIFT) |
- SLB_VSID_USER | llp;
+ SLB_VSID_USER;
break;
case VMALLOC_REGION_ID:
- llp = mmu_psize_defs[mmu_virtual_psize].sllp;
+ if (ea < VMALLOC_END)
+ psize = mmu_vmalloc_psize;
+ else
+ psize = mmu_io_psize;
vsid = (get_kernel_vsid(ea) << SLB_VSID_SHIFT) |
- SLB_VSID_KERNEL | llp;
+ SLB_VSID_KERNEL;
break;
case KERNEL_REGION_ID:
- llp = mmu_psize_defs[mmu_linear_psize].sllp;
+ psize = mmu_linear_psize;
vsid = (get_kernel_vsid(ea) << SLB_VSID_SHIFT) |
- SLB_VSID_KERNEL | llp;
+ SLB_VSID_KERNEL;
break;
default:
/* Future: support kernel segments so that drivers
pr_debug("invalid region access at %016lx\n", ea);
return 1;
}
+ llp = mmu_psize_defs[psize].sllp;
out_be64(&priv2->slb_index_W, spu->slb_replace);
- out_be64(&priv2->slb_vsid_RW, vsid);
+ out_be64(&priv2->slb_vsid_RW, vsid | llp);
out_be64(&priv2->slb_esid_RW, esid);
spu->slb_replace++;
free_irq(spu->irqs[2], spu);
}
-static struct list_head spu_list[MAX_NUMNODES];
-static LIST_HEAD(spu_full_list);
-static DEFINE_MUTEX(spu_mutex);
-
static void spu_init_channels(struct spu *spu)
{
static const struct {
struct spu *spu;
int ret;
static int number;
+ unsigned long flags;
ret = -ENOMEM;
spu = kzalloc(sizeof (*spu), GFP_KERNEL);
goto out_free_irqs;
mutex_lock(&spu_mutex);
+ spin_lock_irqsave(&spu_list_lock, flags);
list_add(&spu->list, &spu_list[spu->node]);
list_add(&spu->full_list, &spu_full_list);
+ spin_unlock_irqrestore(&spu_list_lock, flags);
mutex_unlock(&spu_mutex);
goto out;
spufs_mem_read(struct file *file, char __user *buffer,
size_t size, loff_t *pos)
{
- int ret;
struct spu_context *ctx = file->private_data;
+ ssize_t ret;
spu_acquire(ctx);
ret = __spufs_mem_read(ctx, buffer, size, pos);
static ssize_t
spufs_mem_write(struct file *file, const char __user *buffer,
- size_t size, loff_t *pos)
+ size_t size, loff_t *ppos)
{
struct spu_context *ctx = file->private_data;
char *local_store;
+ loff_t pos = *ppos;
int ret;
- size = min_t(ssize_t, LS_SIZE - *pos, size);
- if (size <= 0)
+ if (pos < 0)
+ return -EINVAL;
+ if (pos > LS_SIZE)
return -EFBIG;
- *pos += size;
+ if (size > LS_SIZE - pos)
+ size = LS_SIZE - pos;
spu_acquire(ctx);
-
local_store = ctx->ops->get_ls(ctx);
- ret = copy_from_user(local_store + *pos - size,
- buffer, size) ? -EFAULT : size;
-
+ ret = copy_from_user(local_store + pos, buffer, size);
spu_release(ctx);
- return ret;
+
+ if (ret)
+ return -EFAULT;
+ *ppos = pos + size;
+ return size;
}
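The rewritten handler follows the usual pattern for a write() method backed by
a fixed-size store: validate the offset, clamp the length to what fits, copy,
and only advance the file position once the copy has succeeded, so a faulting
copy_from_user() leaves *ppos untouched. A minimal sketch of that pattern over
a hypothetical fixed-size buffer (BUF_SIZE, fixed_buf and the missing locking
are placeholders, not spufs internals):

	#include <linux/fs.h>
	#include <linux/uaccess.h>

	#define BUF_SIZE 4096			/* hypothetical store size */
	static char fixed_buf[BUF_SIZE];

	static ssize_t fixed_buf_write(struct file *file,
				       const char __user *ubuf,
				       size_t size, loff_t *ppos)
	{
		loff_t pos = *ppos;

		if (pos < 0)
			return -EINVAL;
		if (pos > BUF_SIZE)
			return -EFBIG;		/* starts past the end */
		if (size > BUF_SIZE - pos)
			size = BUF_SIZE - pos;	/* clamp to remaining space */

		if (copy_from_user(fixed_buf + pos, ubuf, size))
			return -EFAULT;		/* *ppos unchanged on fault */

		*ppos = pos + size;
		return size;
	}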
static unsigned long spufs_mem_mmap_nopfn(struct vm_area_struct *vma,
int ret;
unsigned long runcntl = SPU_RUNCNTL_RUNNABLE;
- ret = spu_acquire_runnable(ctx, SPU_ACTIVATE_NOWAKE);
+ ret = spu_acquire_runnable(ctx, 0);
if (ret)
return ret;
spu_release(ctx);
ret = spu_setup_isolated(ctx);
if (!ret)
- ret = spu_acquire_runnable(ctx, SPU_ACTIVATE_NOWAKE);
+ ret = spu_acquire_runnable(ctx, 0);
}
/* if userspace has set the runcntrl register (eg, to issue an
mutex_unlock(&spu_prio->active_mutex[node]);
}
-static inline void mm_needs_global_tlbie(struct mm_struct *mm)
-{
- int nr = (NR_CPUS > 1) ? NR_CPUS : NR_CPUS + 1;
-
- /* Global TLBIE broadcast required with SPEs. */
- __cpus_setall(&mm->cpu_vm_mask, nr);
-}
-
static BLOCKING_NOTIFIER_HEAD(spu_switch_notifier);
static void spu_switch_notify(struct spu *spu, struct spu_context *ctx)
ctx->spu = spu;
ctx->ops = &spu_hw_ops;
spu->pid = current->pid;
- spu->mm = ctx->owner;
- mm_needs_global_tlbie(spu->mm);
+ spu_associate_mm(spu, ctx->owner);
spu->ibox_callback = spufs_ibox_callback;
spu->wbox_callback = spufs_wbox_callback;
spu->stop_callback = spufs_stop_callback;
spu->stop_callback = NULL;
spu->mfc_callback = NULL;
spu->dma_callback = NULL;
- spu->mm = NULL;
+ spu_associate_mm(spu, NULL);
spu->pid = 0;
ctx->ops = &spu_backing_ops;
ctx->spu = NULL;
{
DEFINE_WAIT(wait);
- set_bit(SPU_SCHED_WAKE, &ctx->sched_flags);
prepare_to_wait_exclusive(&ctx->stop_wq, &wait, TASK_INTERRUPTIBLE);
if (!signal_pending(current)) {
mutex_unlock(&ctx->state_mutex);
}
__set_current_state(TASK_RUNNING);
remove_wait_queue(&ctx->stop_wq, &wait);
- clear_bit(SPU_SCHED_WAKE, &ctx->sched_flags);
}
/**
best = sched_find_first_bit(spu_prio->bitmap);
if (best < MAX_PRIO) {
struct spu_context *ctx = spu_grab_context(best);
- if (ctx && test_bit(SPU_SCHED_WAKE, &ctx->sched_flags))
+ if (ctx)
wake_up(&ctx->stop_wq);
}
spin_unlock(&spu_prio->runq_lock);
}
spu_add_to_rq(ctx);
- if (!(flags & SPU_ACTIVATE_NOWAKE))
- spu_prio_wait(ctx);
+ spu_prio_wait(ctx);
spu_del_from_rq(ctx);
} while (!signal_pending(current));
/* ctx->sched_flags */
enum {
- SPU_SCHED_WAKE = 0,
+ SPU_SCHED_WAKE = 0, /* currently unused */
};
struct spu_context {
int spu_acquire_runnable(struct spu_context *ctx, unsigned long flags);
void spu_acquire_saved(struct spu_context *ctx);
int spu_acquire_exclusive(struct spu_context *ctx);
-enum {
- SPU_ACTIVATE_NOWAKE = 1,
-};
+
int spu_activate(struct spu_context *ctx, unsigned long flags);
void spu_deactivate(struct spu_context *ctx);
void spu_yield(struct spu_context *ctx);
MFC_CNTL_PURGE_DMA_COMPLETE);
}
-static inline void save_mfc_slbs(struct spu_state *csa, struct spu *spu)
-{
- struct spu_priv2 __iomem *priv2 = spu->priv2;
- int i;
-
- /* Save, Step 29:
- * If MFC_SR1[R]='1', save SLBs in CSA.
- */
- if (spu_mfc_sr1_get(spu) & MFC_STATE1_RELOCATE_MASK) {
- csa->priv2.slb_index_W = in_be64(&priv2->slb_index_W);
- for (i = 0; i < 8; i++) {
- out_be64(&priv2->slb_index_W, i);
- eieio();
- csa->slb_esid_RW[i] = in_be64(&priv2->slb_esid_RW);
- csa->slb_vsid_RW[i] = in_be64(&priv2->slb_vsid_RW);
- eieio();
- }
- }
-}
-
static inline void setup_mfc_sr1(struct spu_state *csa, struct spu *spu)
{
/* Save, Step 30:
out_be64(&priv2->mfc_control_RW, MFC_CNTL_RESUME_DMA_QUEUE);
}
-static inline void invalidate_slbs(struct spu_state *csa, struct spu *spu)
-{
- struct spu_priv2 __iomem *priv2 = spu->priv2;
-
- /* Save, Step 45:
- * Restore, Step 19:
- * If MFC_SR1[R]=1, write 0 to SLB_Invalidate_All.
- */
- if (spu_mfc_sr1_get(spu) & MFC_STATE1_RELOCATE_MASK) {
- out_be64(&priv2->slb_invalidate_all_W, 0UL);
- eieio();
- }
-}
-
static inline void get_kernel_slb(u64 ea, u64 slb[2])
{
u64 llp;
* MFC_SR1[R]=1 (in other words, assume that
* translation is desired by OS environment).
*/
- invalidate_slbs(csa, spu);
+ spu_invalidate_slbs(spu);
get_kernel_slb((unsigned long)&spu_save_code[0], code_slb);
get_kernel_slb((unsigned long)csa->lscsa, lscsa_slb);
load_mfc_slb(spu, code_slb, 0);
}
}
-static inline void restore_mfc_slbs(struct spu_state *csa, struct spu *spu)
-{
- struct spu_priv2 __iomem *priv2 = spu->priv2;
- int i;
-
- /* Restore, Step 68:
- * If MFC_SR1[R]='1', restore SLBs from CSA.
- */
- if (csa->priv1.mfc_sr1_RW & MFC_STATE1_RELOCATE_MASK) {
- for (i = 0; i < 8; i++) {
- out_be64(&priv2->slb_index_W, i);
- eieio();
- out_be64(&priv2->slb_esid_RW, csa->slb_esid_RW[i]);
- out_be64(&priv2->slb_vsid_RW, csa->slb_vsid_RW[i]);
- eieio();
- }
- out_be64(&priv2->slb_index_W, csa->priv2.slb_index_W);
- eieio();
- }
-}
-
static inline void restore_mfc_sr1(struct spu_state *csa, struct spu *spu)
{
/* Restore, Step 69:
set_mfc_tclass_id(prev, spu); /* Step 26. */
purge_mfc_queue(prev, spu); /* Step 27. */
wait_purge_complete(prev, spu); /* Step 28. */
- save_mfc_slbs(prev, spu); /* Step 29. */
setup_mfc_sr1(prev, spu); /* Step 30. */
save_spu_npc(prev, spu); /* Step 31. */
save_spu_privcntl(prev, spu); /* Step 32. */
reset_spu_privcntl(prev, spu); /* Step 16. */
reset_spu_lslr(prev, spu); /* Step 17. */
setup_mfc_sr1(prev, spu); /* Step 18. */
- invalidate_slbs(prev, spu); /* Step 19. */
+ spu_invalidate_slbs(spu); /* Step 19. */
reset_ch_part1(prev, spu); /* Step 20. */
reset_ch_part2(prev, spu); /* Step 21. */
enable_interrupts(prev, spu); /* Step 22. */
restore_spu_mb(next, spu); /* Step 65. */
check_ppu_mb_stat(next, spu); /* Step 66. */
check_ppuint_mb_stat(next, spu); /* Step 67. */
- restore_mfc_slbs(next, spu); /* Step 68. */
+ spu_invalidate_slbs(spu); /* Modified Step 68. */
restore_mfc_sr1(next, spu); /* Step 69. */
restore_other_spu_access(next, spu); /* Step 70. */
restore_spu_runcntl(next, spu); /* Step 71. */
#define IOBMAP_L2E_V 0x80000000
#define IOBMAP_L2E_V_CACHED 0xc0000000
-static u32 *iob;
+static u32 __iomem *iob;
static u32 iob_l1_emptyval;
static u32 iob_l2_emptyval;
static u32 *iob_l2_base;
unsigned long nr_pages;
if (!firmware_has_feature(FW_FEATURE_PS3_LV1))
- return 0;
+ return -ENODEV;
BUG_ON(!mem_init_done);
int result;
if (!firmware_has_feature(FW_FEATURE_PS3_LV1))
- return 0;
+ return -ENODEV;
result = bus_register(&ps3_system_bus_type);
BUG_ON(result);
/*
* postcall is performed immediately before function return which
- * allows liberal use of volatile registers.
+ * allows liberal use of volatile registers. We branch around this
+ * in early init (eg when populating the MMU hashtable) by using an
+ * unconditional cpu feature.
*/
#define HCALL_INST_POSTCALL \
+BEGIN_FTR_SECTION; \
+ b 1f; \
+END_FTR_SECTION(0, 1); \
ld r4,STK_PARM(r3)(r1); /* validate opcode */ \
cmpldi cr7,r4,MAX_HCALL_OPCODE; \
bgt- cr7,1f; \
blr /* return r3 = status */
+/*
+ * plpar_hcall_raw can be called in real mode. kexec/kdump need some
+ * hypervisor calls to be executed in real mode. So plpar_hcall_raw
+ * does not access the per cpu hypervisor call statistics variables,
+ * since these variables may not be present in the RMO region.
+ */
+_GLOBAL(plpar_hcall_raw)
+ HMT_MEDIUM
+
+ mfcr r0
+ stw r0,8(r1)
+
+ std r4,STK_PARM(r4)(r1) /* Save ret buffer */
+
+ mr r4,r5
+ mr r5,r6
+ mr r6,r7
+ mr r7,r8
+ mr r8,r9
+ mr r9,r10
+
+ HVSC /* invoke the hypervisor */
+
+ ld r12,STK_PARM(r4)(r1)
+ std r4, 0(r12)
+ std r5, 8(r12)
+ std r6, 16(r12)
+ std r7, 24(r12)
+
+ lwz r0,8(r1)
+ mtcrf 0xff,r0
+
+ blr /* return r3 = status */
+
_GLOBAL(plpar_hcall9)
HMT_MEDIUM
/* TODO: Use bulk call */
for (i = 0; i < hpte_count; i++)
- plpar_pte_remove(0, i, 0, &dummy1, &dummy2);
+ plpar_pte_remove_raw(0, i, 0, &dummy1, &dummy2);
}
/*
return rc;
}
+/* plpar_pte_remove_raw can be called in real mode. It calls plpar_hcall_raw */
+static inline long plpar_pte_remove_raw(unsigned long flags, unsigned long ptex,
+ unsigned long avpn, unsigned long *old_pteh_ret,
+ unsigned long *old_ptel_ret)
+{
+ long rc;
+ unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+
+ rc = plpar_hcall_raw(H_REMOVE, retbuf, flags, ptex, avpn);
+
+ *old_pteh_ret = retbuf[0];
+ *old_ptel_ret = retbuf[1];
+
+ return rc;
+}
+
static inline long plpar_pte_read(unsigned long flags, unsigned long ptex,
unsigned long *old_pteh_ret, unsigned long *old_ptel_ret)
{
if (h.token == NULL)
return;
- h.token -= dcr_n * h.stride;
+ h.token += dcr_n * h.stride;
iounmap(h.token);
h.token = NULL;
}
/* allocate 2 internal temporary buffers (512 bytes size each) for
* the SDMA */
- sdma_buf_offset = qe_muram_alloc(512 * 2, 64);
+ sdma_buf_offset = qe_muram_alloc(512 * 2, 4096);
if (IS_MURAM_ERR(sdma_buf_offset))
return -ENOMEM;
out_be32(&sdma->sdebcr, sdma_buf_offset & QE_SDEBCR_BA_MASK);
- out_be32(&sdma->sdmr, (QE_SDMR_GLB_1_MSK | (0x1 >>
- QE_SDMR_CEN_SHIFT)));
+ out_be32(&sdma->sdmr, (QE_SDMR_GLB_1_MSK |
+ (0x1 << QE_SDMR_CEN_SHIFT)));
return 0;
}
#include <asm/tlbflush.h>
#include <asm/rheap.h>
+#define immr_map(member) \
+({ \
+ u32 offset = offsetof(immap_t, member); \
+ void *addr = ioremap (IMAP_ADDR + offset, \
+ sizeof( ((immap_t*)0)->member)); \
+ addr; \
+})
+
+#define immr_map_size(member, size) \
+({ \
+ u32 offset = offsetof(immap_t, member); \
+ void *addr = ioremap (IMAP_ADDR + offset, size); \
+ addr; \
+})
+
static void m8xx_cpm_dpinit(void);
static uint host_buffer; /* One page of host buffer */
static uint host_end; /* end + 1 */
static rh_info_t cpm_dpmem_info;
#define CPM_DPMEM_ALIGNMENT 8
+static u8* dpram_vbase;
+static uint dpram_pbase;
void m8xx_cpm_dpinit(void)
{
spin_lock_init(&cpm_dpmem_lock);
+ dpram_vbase = immr_map_size(im_cpm.cp_dpmem, CPM_DATAONLY_BASE + CPM_DATAONLY_SIZE);
+ dpram_pbase = (uint)&((immap_t *)IMAP_ADDR)->im_cpm.cp_dpmem;
+
/* Initialize the info header */
rh_init(&cpm_dpmem_info, CPM_DPMEM_ALIGNMENT,
sizeof(cpm_boot_dpmem_rh_block) /
return ((immap_t *)IMAP_ADDR)->im_cpm.cp_dpmem + offset;
}
EXPORT_SYMBOL(cpm_dpram_addr);
+
+uint cpm_dpram_phys(u8* addr)
+{
+ return (dpram_pbase + (uint)(addr - dpram_vbase));
+}
+EXPORT_SYMBOL(cpm_dpram_phys);
#
# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.21-rc5
+# Wed Apr 4 20:55:16 2007
#
CONFIG_MMU=y
+CONFIG_GENERIC_HARDIRQS=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
-CONFIG_HAVE_DEC_LOCK=y
+CONFIG_ARCH_HAS_ILOG2_U32=y
+# CONFIG_ARCH_HAS_ILOG2_U64 is not set
+CONFIG_GENERIC_HWEIGHT=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PPC=y
CONFIG_PPC32=y
CONFIG_GENERIC_NVRAM=y
+CONFIG_GENERIC_FIND_NEXT_BIT=y
+CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
+CONFIG_ARCH_MAY_HAVE_PC_FDC=y
+CONFIG_GENERIC_BUG=y
+CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
-CONFIG_CLEAN_COMPILE=y
-CONFIG_STANDALONE=y
CONFIG_BROKEN_ON_SMP=y
+CONFIG_INIT_ENV_ARG_LIMIT=32
#
# General setup
#
+CONFIG_LOCALVERSION=""
+CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
+# CONFIG_IPC_NS is not set
+CONFIG_SYSVIPC_SYSCTL=y
# CONFIG_POSIX_MQUEUE is not set
# CONFIG_BSD_PROCESS_ACCT is not set
-CONFIG_SYSCTL=y
+# CONFIG_TASKSTATS is not set
+# CONFIG_UTS_NS is not set
# CONFIG_AUDIT is not set
-CONFIG_LOG_BUF_SHIFT=14
-# CONFIG_HOTPLUG is not set
# CONFIG_IKCONFIG is not set
+CONFIG_SYSFS_DEPRECATED=y
+# CONFIG_RELAY is not set
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SYSCTL=y
CONFIG_EMBEDDED=y
+CONFIG_SYSCTL_SYSCALL=y
# CONFIG_KALLSYMS is not set
+# CONFIG_HOTPLUG is not set
+CONFIG_PRINTK=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
# CONFIG_EPOLL is not set
-CONFIG_IOSCHED_NOOP=y
-CONFIG_IOSCHED_AS=y
-CONFIG_IOSCHED_DEADLINE=y
-CONFIG_IOSCHED_CFQ=y
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SHMEM=y
+CONFIG_SLAB=y
+CONFIG_VM_EVENT_COUNTERS=y
+CONFIG_RT_MUTEXES=y
+# CONFIG_TINY_SHMEM is not set
+CONFIG_BASE_SMALL=0
+# CONFIG_SLOB is not set
#
# Loadable module support
#
# CONFIG_MODULES is not set
+#
+# Block layer
+#
+CONFIG_BLOCK=y
+# CONFIG_LBD is not set
+# CONFIG_BLK_DEV_IO_TRACE is not set
+# CONFIG_LSF is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_DEFAULT_AS is not set
+# CONFIG_DEFAULT_DEADLINE is not set
+CONFIG_DEFAULT_CFQ=y
+# CONFIG_DEFAULT_NOOP is not set
+CONFIG_DEFAULT_IOSCHED="cfq"
+
#
# Processor
#
CONFIG_6xx=y
# CONFIG_40x is not set
# CONFIG_44x is not set
-# CONFIG_POWER3 is not set
-# CONFIG_POWER4 is not set
# CONFIG_8xx is not set
+# CONFIG_E200 is not set
+# CONFIG_E500 is not set
+CONFIG_PPC_FPU=y
+# CONFIG_PPC_DCR_NATIVE is not set
+# CONFIG_KEXEC is not set
# CONFIG_CPU_FREQ is not set
+# CONFIG_WANT_EARLY_SERIAL is not set
CONFIG_EMBEDDEDBOOT=y
CONFIG_PPC_STD_MMU=y
#
# Platform options
#
-# CONFIG_PPC_MULTIPLATFORM is not set
+
+#
+# Freescale Ethernet driver platform-specific options
+#
+# CONFIG_PPC_PREP is not set
# CONFIG_APUS is not set
+# CONFIG_KATANA is not set
# CONFIG_WILLOW is not set
-# CONFIG_PCORE is not set
+# CONFIG_CPCI690 is not set
# CONFIG_POWERPMC250 is not set
-# CONFIG_EV64260 is not set
+# CONFIG_CHESTNUT is not set
# CONFIG_SPRUCE is not set
+# CONFIG_HDPU is not set
+# CONFIG_EV64260 is not set
# CONFIG_LOPEC is not set
-# CONFIG_MCPN765 is not set
# CONFIG_MVME5100 is not set
# CONFIG_PPLUS is not set
# CONFIG_PRPMC750 is not set
# CONFIG_PRPMC800 is not set
# CONFIG_SANDPOINT is not set
-# CONFIG_ADIR is not set
-# CONFIG_K2 is not set
+# CONFIG_RADSTONE_PPC7D is not set
# CONFIG_PAL4 is not set
-# CONFIG_GEMINI is not set
# CONFIG_EST8260 is not set
# CONFIG_SBC82xx is not set
# CONFIG_SBS8260 is not set
-# CONFIG_RPX6 is not set
+# CONFIG_RPX8260 is not set
# CONFIG_TQM8260 is not set
CONFIG_ADS8272=y
+# CONFIG_PQ2FADS is not set
+# CONFIG_LITE5200 is not set
+# CONFIG_MPC834x_SYS is not set
+# CONFIG_EV64360 is not set
CONFIG_PQ2ADS=y
CONFIG_8260=y
CONFIG_8272=y
CONFIG_CPM2=y
# CONFIG_PC_KEYBOARD is not set
-CONFIG_SERIAL_CONSOLE=y
# CONFIG_SMP is not set
-# CONFIG_PREEMPT is not set
# CONFIG_HIGHMEM is not set
-CONFIG_KERNEL_ELF=y
+CONFIG_ARCH_POPULATES_NODE_MAP=y
+# CONFIG_HZ_100 is not set
+CONFIG_HZ_250=y
+# CONFIG_HZ_300 is not set
+# CONFIG_HZ_1000 is not set
+CONFIG_HZ=250
+CONFIG_PREEMPT_NONE=y
+# CONFIG_PREEMPT_VOLUNTARY is not set
+# CONFIG_PREEMPT is not set
+CONFIG_SELECT_MEMORY_MODEL=y
+CONFIG_FLATMEM_MANUAL=y
+# CONFIG_DISCONTIGMEM_MANUAL is not set
+# CONFIG_SPARSEMEM_MANUAL is not set
+CONFIG_FLATMEM=y
+CONFIG_FLAT_NODE_MEM_MAP=y
+# CONFIG_SPARSEMEM_STATIC is not set
+CONFIG_SPLIT_PTLOCK_CPUS=4
+# CONFIG_RESOURCES_64BIT is not set
+CONFIG_ZONE_DMA_FLAG=1
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
# CONFIG_CMDLINE_BOOL is not set
+# CONFIG_PM is not set
+CONFIG_SECCOMP=y
+CONFIG_ISA_DMA_API=y
#
# Bus options
#
+CONFIG_ZONE_DMA=y
+# CONFIG_PPC_I8259 is not set
+CONFIG_PPC_INDIRECT_PCI=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
-# CONFIG_PCI_LEGACY_PROC is not set
-# CONFIG_PCI_NAMES is not set
+CONFIG_PCI_8260=y
+
+#
+# PCCARD (PCMCIA/CardBus) support
+#
#
# Advanced setup
CONFIG_TASK_SIZE=0x80000000
CONFIG_BOOT_LOAD=0x00400000
+#
+# Networking
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+# CONFIG_NETDEBUG is not set
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+CONFIG_UNIX=y
+CONFIG_XFRM=y
+# CONFIG_XFRM_USER is not set
+# CONFIG_XFRM_SUB_POLICY is not set
+# CONFIG_XFRM_MIGRATE is not set
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_FIB_HASH=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_MROUTE is not set
+# CONFIG_ARPD is not set
+CONFIG_SYN_COOKIES=y
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_INET_XFRM_TUNNEL is not set
+# CONFIG_INET_TUNNEL is not set
+CONFIG_INET_XFRM_MODE_TRANSPORT=y
+CONFIG_INET_XFRM_MODE_TUNNEL=y
+CONFIG_INET_XFRM_MODE_BEET=y
+CONFIG_INET_DIAG=y
+CONFIG_INET_TCP_DIAG=y
+# CONFIG_TCP_CONG_ADVANCED is not set
+CONFIG_TCP_CONG_CUBIC=y
+CONFIG_DEFAULT_TCP_CONG="cubic"
+# CONFIG_TCP_MD5SIG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_INET6_XFRM_TUNNEL is not set
+# CONFIG_INET6_TUNNEL is not set
+# CONFIG_NETWORK_SECMARK is not set
+# CONFIG_NETFILTER is not set
+
+#
+# DCCP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_DCCP is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+
+#
+# TIPC Configuration (EXPERIMENTAL)
+#
+# CONFIG_TIPC is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+# CONFIG_IEEE80211 is not set
+
#
# Device Drivers
#
#
# Generic Driver Options
#
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+# CONFIG_SYS_HYPERVISOR is not set
+
+#
+# Connector - unified userspace <-> kernelspace linker
+#
+# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
# Plug and Play support
#
+# CONFIG_PNPACPI is not set
#
# Block devices
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_NBD is not set
-# CONFIG_BLK_DEV_CARMEL is not set
+# CONFIG_BLK_DEV_SX8 is not set
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=32768
-CONFIG_BLK_DEV_INITRD=y
-# CONFIG_LBD is not set
+CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024
+# CONFIG_CDROM_PKTCDVD is not set
+# CONFIG_ATA_OVER_ETH is not set
+
+#
+# Misc devices
+#
+# CONFIG_SGI_IOC4 is not set
+# CONFIG_TIFM_CORE is not set
#
# ATA/ATAPI/MFM/RLL support
#
# SCSI device support
#
+# CONFIG_RAID_ATTRS is not set
# CONFIG_SCSI is not set
+# CONFIG_SCSI_NETLINK is not set
+
+#
+# Serial ATA (prod) and Parallel ATA (experimental) drivers
+#
+# CONFIG_ATA is not set
#
# Multi-device support (RAID and LVM)
#
# Fusion MPT device support
#
+# CONFIG_FUSION is not set
#
# IEEE 1394 (FireWire) support
#
# Macintosh device drivers
#
+# CONFIG_MAC_EMUMOUSEBTN is not set
+# CONFIG_WINDFARM is not set
#
-# Networking support
-#
-CONFIG_NET=y
-
-#
-# Networking options
+# Network device support
#
-CONFIG_PACKET=y
-# CONFIG_PACKET_MMAP is not set
-# CONFIG_NETLINK_DEV is not set
-CONFIG_UNIX=y
-# CONFIG_NET_KEY is not set
-CONFIG_INET=y
-CONFIG_IP_MULTICAST=y
-# CONFIG_IP_ADVANCED_ROUTER is not set
-CONFIG_IP_PNP=y
-CONFIG_IP_PNP_DHCP=y
-CONFIG_IP_PNP_BOOTP=y
-# CONFIG_IP_PNP_RARP is not set
-# CONFIG_NET_IPIP is not set
-# CONFIG_NET_IPGRE is not set
-# CONFIG_IP_MROUTE is not set
-# CONFIG_ARPD is not set
-CONFIG_SYN_COOKIES=y
-# CONFIG_INET_AH is not set
-# CONFIG_INET_ESP is not set
-# CONFIG_INET_IPCOMP is not set
-# CONFIG_IPV6 is not set
-# CONFIG_NETFILTER is not set
-
-#
-# SCTP Configuration (EXPERIMENTAL)
-#
-# CONFIG_IP_SCTP is not set
-# CONFIG_ATM is not set
-# CONFIG_BRIDGE is not set
-# CONFIG_VLAN_8021Q is not set
-# CONFIG_DECNET is not set
-# CONFIG_LLC2 is not set
-# CONFIG_IPX is not set
-# CONFIG_ATALK is not set
-# CONFIG_X25 is not set
-# CONFIG_LAPB is not set
-# CONFIG_NET_DIVERT is not set
-# CONFIG_ECONET is not set
-# CONFIG_WAN_ROUTER is not set
-# CONFIG_NET_HW_FLOWCONTROL is not set
-
-#
-# QoS and/or fair queueing
-#
-# CONFIG_NET_SCHED is not set
-
-#
-# Network testing
-#
-# CONFIG_NET_PKTGEN is not set
-# CONFIG_NETPOLL is not set
-# CONFIG_NET_POLL_CONTROLLER is not set
-# CONFIG_HAMRADIO is not set
-# CONFIG_IRDA is not set
-# CONFIG_BT is not set
CONFIG_NETDEVICES=y
# CONFIG_DUMMY is not set
# CONFIG_BONDING is not set
#
# CONFIG_ARCNET is not set
+#
+# PHY device support
+#
+CONFIG_PHYLIB=y
+
+#
+# MII PHY device drivers
+#
+# CONFIG_MARVELL_PHY is not set
+CONFIG_DAVICOM_PHY=y
+# CONFIG_QSEMI_PHY is not set
+# CONFIG_LXT_PHY is not set
+# CONFIG_CICADA_PHY is not set
+# CONFIG_VITESSE_PHY is not set
+# CONFIG_SMSC_PHY is not set
+# CONFIG_BROADCOM_PHY is not set
+# CONFIG_FIXED_PHY is not set
+
#
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
-# CONFIG_MII is not set
-# CONFIG_OAKNET is not set
+CONFIG_MII=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
+# CONFIG_CASSINI is not set
# CONFIG_NET_VENDOR_3COM is not set
#
# CONFIG_NET_TULIP is not set
# CONFIG_HP100 is not set
# CONFIG_NET_PCI is not set
+CONFIG_FS_ENET=y
+# CONFIG_FS_ENET_HAS_SCC is not set
+CONFIG_FS_ENET_HAS_FCC=y
#
# Ethernet (1000 Mbit)
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
+# CONFIG_SIS190 is not set
+# CONFIG_SKGE is not set
+# CONFIG_SKY2 is not set
# CONFIG_SK98LIN is not set
# CONFIG_TIGON3 is not set
+# CONFIG_BNX2 is not set
+# CONFIG_QLA3XXX is not set
+# CONFIG_ATL1 is not set
#
# Ethernet (10000 Mbit)
#
+# CONFIG_CHELSIO_T1 is not set
+# CONFIG_CHELSIO_T3 is not set
# CONFIG_IXGB is not set
# CONFIG_S2IO is not set
+# CONFIG_MYRI10GE is not set
+# CONFIG_NETXEN_NIC is not set
#
# Token Ring devices
# CONFIG_SLIP is not set
# CONFIG_SHAPER is not set
# CONFIG_NETCONSOLE is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
#
# ISDN subsystem
# Input device support
#
CONFIG_INPUT=y
+# CONFIG_INPUT_FF_MEMLESS is not set
#
# Userland interfaces
# CONFIG_INPUT_EVDEV is not set
# CONFIG_INPUT_EVBUG is not set
-#
-# Input I/O drivers
-#
-# CONFIG_GAMEPORT is not set
-CONFIG_SOUND_GAMEPORT=y
-# CONFIG_SERIO is not set
-# CONFIG_SERIO_I8042 is not set
-
#
# Input Device Drivers
#
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
+#
+# Hardware I/O ports
+#
+# CONFIG_SERIO is not set
+# CONFIG_GAMEPORT is not set
+
#
# Character devices
#
#
# Non-8250 serial port support
#
+# CONFIG_SERIAL_UARTLITE is not set
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_SERIAL_CPM=y
+CONFIG_SERIAL_CPM_CONSOLE=y
+CONFIG_SERIAL_CPM_SCC1=y
+# CONFIG_SERIAL_CPM_SCC2 is not set
+# CONFIG_SERIAL_CPM_SCC3 is not set
+CONFIG_SERIAL_CPM_SCC4=y
+# CONFIG_SERIAL_CPM_SMC1 is not set
+# CONFIG_SERIAL_CPM_SMC2 is not set
+# CONFIG_SERIAL_JSM is not set
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
-# CONFIG_QIC02_TAPE is not set
#
# IPMI
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
+CONFIG_HW_RANDOM=y
# CONFIG_NVRAM is not set
CONFIG_GEN_RTC=y
# CONFIG_GEN_RTC_X is not set
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set
-
-#
-# Ftape, the floppy tape device driver
-#
-# CONFIG_FTAPE is not set
# CONFIG_AGP is not set
# CONFIG_DRM is not set
# CONFIG_RAW_DRIVER is not set
+#
+# TPM devices
+#
+# CONFIG_TCG_TPM is not set
+
#
# I2C support
#
# CONFIG_I2C is not set
#
-# Misc devices
+# SPI support
+#
+# CONFIG_SPI is not set
+# CONFIG_SPI_MASTER is not set
+
#
+# Dallas's 1-wire bus
+#
+# CONFIG_W1 is not set
+
+#
+# Hardware Monitoring support
+#
+CONFIG_HWMON=y
+# CONFIG_HWMON_VID is not set
+# CONFIG_SENSORS_ABITUGURU is not set
+# CONFIG_SENSORS_F71805F is not set
+# CONFIG_SENSORS_PC87427 is not set
+# CONFIG_SENSORS_VT1211 is not set
+# CONFIG_HWMON_DEBUG_CHIP is not set
+
+#
+# Multifunction device drivers
+#
+# CONFIG_MFD_SM501 is not set
#
# Multimedia devices
#
# Graphics support
#
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
# CONFIG_FB is not set
+# CONFIG_FB_IBM_GXT4500 is not set
#
# Sound
#
# CONFIG_SOUND is not set
+#
+# HID Devices
+#
+CONFIG_HID=y
+# CONFIG_HID_DEBUG is not set
+
#
# USB support
#
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+CONFIG_USB_ARCH_HAS_EHCI=y
# CONFIG_USB is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support'
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# LED devices
+#
+# CONFIG_NEW_LEDS is not set
+
+#
+# LED drivers
+#
+
+#
+# LED Triggers
+#
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
+#
+# EDAC - error detection and reporting (RAS) (EXPERIMENTAL)
+#
+
+#
+# Real Time Clock
+#
+# CONFIG_RTC_CLASS is not set
+
+#
+# DMA Engine support
+#
+# CONFIG_DMA_ENGINE is not set
+
+#
+# DMA Clients
+#
+
+#
+# DMA Devices
+#
+
+#
+# Auxiliary Display support
+#
+
+#
+# Virtualization
+#
+
#
# File systems
#
CONFIG_EXT2_FS=y
# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT2_FS_XIP is not set
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_XATTR=y
# CONFIG_EXT3_FS_POSIX_ACL is not set
# CONFIG_EXT3_FS_SECURITY is not set
+# CONFIG_EXT4DEV_FS is not set
CONFIG_JBD=y
# CONFIG_JBD_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
+CONFIG_FS_POSIX_ACL=y
# CONFIG_XFS_FS is not set
+# CONFIG_GFS2_FS is not set
+# CONFIG_OCFS2_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
+CONFIG_INOTIFY=y
+CONFIG_INOTIFY_USER=y
# CONFIG_QUOTA is not set
+CONFIG_DNOTIFY=y
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
+# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
#
# DOS/FAT/NT Filesystems
#
-# CONFIG_FAT_FS is not set
+# CONFIG_MSDOS_FS is not set
+# CONFIG_VFAT_FS is not set
# CONFIG_NTFS_FS is not set
#
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
+CONFIG_PROC_SYSCTL=y
CONFIG_SYSFS=y
-# CONFIG_DEVFS_FS is not set
-# CONFIG_DEVPTS_FS_XATTR is not set
CONFIG_TMPFS=y
+# CONFIG_TMPFS_POSIX_ACL is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
+# CONFIG_CONFIGFS_FS is not set
#
# Miscellaneous filesystems
# Network File Systems
#
CONFIG_NFS_FS=y
-# CONFIG_NFS_V3 is not set
-# CONFIG_NFS_V4 is not set
+CONFIG_NFS_V3=y
+CONFIG_NFS_V3_ACL=y
+CONFIG_NFS_V4=y
# CONFIG_NFS_DIRECTIO is not set
# CONFIG_NFSD is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
-# CONFIG_EXPORTFS is not set
+CONFIG_LOCKD_V4=y
+CONFIG_NFS_ACL_SUPPORT=y
+CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
-# CONFIG_RPCSEC_GSS_KRB5 is not set
+CONFIG_SUNRPC_GSS=y
+CONFIG_RPCSEC_GSS_KRB5=y
+# CONFIG_RPCSEC_GSS_SPKM3 is not set
# CONFIG_SMB_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
+# CONFIG_9P_FS is not set
#
# Partition Types
# CONFIG_MAC_PARTITION is not set
# CONFIG_MSDOS_PARTITION is not set
# CONFIG_LDM_PARTITION is not set
-# CONFIG_NEC98_PARTITION is not set
# CONFIG_SGI_PARTITION is not set
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
+# CONFIG_KARMA_PARTITION is not set
# CONFIG_EFI_PARTITION is not set
#
# Native Language Support
#
# CONFIG_NLS is not set
+
+#
+# Distributed Lock Manager
+#
+# CONFIG_DLM is not set
# CONFIG_SCC_ENET is not set
-CONFIG_FEC_ENET=y
-# CONFIG_USE_MDIO is not set
+# CONFIG_FEC_ENET is not set
#
# CPM2 Options
#
-CONFIG_SCC_CONSOLE=y
-CONFIG_FCC1_ENET=y
-# CONFIG_FCC2_ENET is not set
-# CONFIG_FCC3_ENET is not set
#
# Library routines
#
+# CONFIG_CRC_CCITT is not set
+# CONFIG_CRC16 is not set
# CONFIG_CRC32 is not set
# CONFIG_LIBCRC32C is not set
+CONFIG_PLIST=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT=y
+# CONFIG_PROFILING is not set
#
# Kernel hacking
#
+# CONFIG_PRINTK_TIME is not set
+CONFIG_ENABLE_MUST_CHECK=y
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_UNUSED_SYMBOLS is not set
+# CONFIG_DEBUG_FS is not set
+# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_KERNEL is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_KGDB_CONSOLE is not set
#
# Security options
#
+# CONFIG_KEYS is not set
# CONFIG_SECURITY is not set
#
# Cryptographic options
#
-# CONFIG_CRYPTO is not set
+CONFIG_CRYPTO=y
+CONFIG_CRYPTO_ALGAPI=y
+CONFIG_CRYPTO_BLKCIPHER=y
+CONFIG_CRYPTO_MANAGER=y
+# CONFIG_CRYPTO_HMAC is not set
+# CONFIG_CRYPTO_XCBC is not set
+# CONFIG_CRYPTO_NULL is not set
+# CONFIG_CRYPTO_MD4 is not set
+CONFIG_CRYPTO_MD5=y
+# CONFIG_CRYPTO_SHA1 is not set
+# CONFIG_CRYPTO_SHA256 is not set
+# CONFIG_CRYPTO_SHA512 is not set
+# CONFIG_CRYPTO_WP512 is not set
+# CONFIG_CRYPTO_TGR192 is not set
+# CONFIG_CRYPTO_GF128MUL is not set
+CONFIG_CRYPTO_ECB=y
+CONFIG_CRYPTO_CBC=y
+CONFIG_CRYPTO_PCBC=y
+# CONFIG_CRYPTO_LRW is not set
+CONFIG_CRYPTO_DES=y
+# CONFIG_CRYPTO_FCRYPT is not set
+# CONFIG_CRYPTO_BLOWFISH is not set
+# CONFIG_CRYPTO_TWOFISH is not set
+# CONFIG_CRYPTO_SERPENT is not set
+# CONFIG_CRYPTO_AES is not set
+# CONFIG_CRYPTO_CAST5 is not set
+# CONFIG_CRYPTO_CAST6 is not set
+# CONFIG_CRYPTO_TEA is not set
+# CONFIG_CRYPTO_ARC4 is not set
+# CONFIG_CRYPTO_KHAZAD is not set
+# CONFIG_CRYPTO_ANUBIS is not set
+# CONFIG_CRYPTO_DEFLATE is not set
+# CONFIG_CRYPTO_MICHAEL_MIC is not set
+# CONFIG_CRYPTO_CRC32C is not set
+# CONFIG_CRYPTO_CAMELLIA is not set
+
+#
+# Hardware crypto devices
+#
#include <linux/ioport.h>
#include <linux/fs_enet_pd.h>
#include <linux/platform_device.h>
+#include <linux/phy.h>
#include <asm/io.h>
#include <asm/mpc8260.h>
#include "pq2ads_pd.h"
-static void init_fcc1_ioports(void);
-static void init_fcc2_ioports(void);
-static void init_scc1_uart_ioports(void);
-static void init_scc4_uart_ioports(void);
+static void init_fcc1_ioports(struct fs_platform_info*);
+static void init_fcc2_ioports(struct fs_platform_info*);
+static void init_scc1_uart_ioports(struct fs_uart_platform_info*);
+static void init_scc4_uart_ioports(struct fs_uart_platform_info*);
static struct fs_uart_platform_info mpc8272_uart_pdata[] = {
[fsid_scc1_uart] = {
},
};
-static void init_fcc1_ioports(struct fs_platform_info*)
+static void init_fcc1_ioports(struct fs_platform_info* pdata)
{
struct io_port *io;
u32 tempval;
iounmap(immap);
}
-static void init_fcc2_ioports(struct fs_platform_info*)
+static void init_fcc2_ioports(struct fs_platform_info* pdata)
{
cpm2_map_t* immap = ioremap(CPM_MAP_ADDR, sizeof(cpm2_map_t));
u32 *bcsr = ioremap(BCSR_ADDR+12, sizeof(u32));
}
}
-static void init_scc1_uart_ioports(struct fs_uart_platform_info*)
+static void init_scc1_uart_ioports(struct fs_uart_platform_info* pdata)
{
cpm2_map_t* immap = ioremap(CPM_MAP_ADDR, sizeof(cpm2_map_t));
iounmap(immap);
}
-static void init_scc4_uart_ioports(struct fs_uart_platform_info*)
+static void init_scc4_uart_ioports(struct fs_uart_platform_info* pdata)
{
cpm2_map_t* immap = ioremap(CPM_MAP_ADDR, sizeof(cpm2_map_t));
#include <linux/fs_enet_pd.h>
#include <linux/fs_uart_pd.h>
#include <linux/mii.h>
+#include <linux/phy.h>
#include <asm/delay.h>
#include <asm/io.h>
extern unsigned char __res[];
-static void setup_fec1_ioports(void);
-static void setup_scc1_ioports(void);
-static void setup_smc1_ioports(void);
-static void setup_smc2_ioports(void);
+static void setup_fec1_ioports(struct fs_platform_info*);
+static void setup_scc1_ioports(struct fs_platform_info*);
+static void setup_smc1_ioports(struct fs_uart_platform_info*);
+static void setup_smc2_ioports(struct fs_uart_platform_info*);
static struct fs_mii_fec_platform_info mpc8xx_mdio_fec_pdata;
iounmap(bcsr_io);
}
-static void setup_fec1_ioports(struct fs_platform_info*)
+static void setup_fec1_ioports(struct fs_platform_info* pdata)
{
immap_t *immap = (immap_t *) IMAP_ADDR;
setbits16(&immap->im_ioport.iop_pddir, 0x1fff);
}
-static void setup_scc1_ioports(struct fs_platform_info*)
+static void setup_scc1_ioports(struct fs_platform_info* pdata)
{
immap_t *immap = (immap_t *) IMAP_ADDR;
unsigned *bcsr_io;
}
-static void setup_smc1_ioports(struct fs_uart_platform_info*)
+static void setup_smc1_ioports(struct fs_uart_platform_info* pdata)
{
immap_t *immap = (immap_t *) IMAP_ADDR;
unsigned *bcsr_io;
}
-static void setup_smc2_ioports(struct fs_uart_platform_info*)
+static void setup_smc2_ioports(struct fs_uart_platform_info* pdata)
{
immap_t *immap = (immap_t *) IMAP_ADDR;
unsigned *bcsr_io;
#include <asm/ppc_sys.h>
extern unsigned char __res[];
-static void setup_smc1_ioports(void);
-static void setup_smc2_ioports(void);
+static void setup_smc1_ioports(struct fs_uart_platform_info*);
+static void setup_smc2_ioports(struct fs_uart_platform_info*);
static struct fs_mii_fec_platform_info mpc8xx_mdio_fec_pdata;
-static void setup_fec1_ioports(void);
-static void setup_fec2_ioports(void);
-static void setup_scc3_ioports(void);
+static void setup_fec1_ioports(struct fs_platform_info*);
+static void setup_fec2_ioports(struct fs_platform_info*);
+static void setup_scc3_ioports(struct fs_platform_info*);
static struct fs_uart_platform_info mpc885_uart_pdata[] = {
[fsid_smc1_uart] = {
#endif
}
-static void setup_fec1_ioports(struct fs_platform_info*)
+static void setup_fec1_ioports(struct fs_platform_info* pdata)
{
immap_t *immap = (immap_t *) IMAP_ADDR;
clrbits32(&immap->im_cpm.cp_cptr, 0x00000100);
}
-static void setup_fec2_ioports(struct fs_platform_info*)
+static void setup_fec2_ioports(struct fs_platform_info* pdata)
{
immap_t *immap = (immap_t *) IMAP_ADDR;
clrbits32(&immap->im_cpm.cp_cptr, 0x00000080);
}
-static void setup_scc3_ioports(struct fs_platform_info*)
+static void setup_scc3_ioports(struct fs_platform_info* pdata)
{
immap_t *immap = (immap_t *) IMAP_ADDR;
unsigned *bcsr_io;
mpc885ads_fixup_enet_pdata(pdev, fsid_scc1 + pdev->id - 1);
}
-static void setup_smc1_ioports(struct fs_uart_platform_info*)
+static void setup_smc1_ioports(struct fs_uart_platform_info* pdata)
{
immap_t *immap = (immap_t *) IMAP_ADDR;
unsigned *bcsr_io;
clrbits16(&immap->im_cpm.cp_pbodr, iobits);
}
-static void setup_smc2_ioports(struct fs_uart_platform_info*)
+static void setup_smc2_ioports(struct fs_uart_platform_info* pdata)
{
immap_t *immap = (immap_t *) IMAP_ADDR;
unsigned *bcsr_io;
mem_data->pgpgout = ev[PGPGOUT] >> 1;
mem_data->pswpin = ev[PSWPIN];
mem_data->pswpout = ev[PSWPOUT];
- mem_data->pgalloc = ev[PGALLOC_NORMAL] + ev[PGALLOC_DMA];
+ mem_data->pgalloc = ev[PGALLOC_NORMAL];
+#ifdef CONFIG_ZONE_DMA
+ mem_data->pgalloc += ev[PGALLOC_DMA];
+#endif
mem_data->pgfault = ev[PGFAULT];
mem_data->pgmajfault = ev[PGMAJFAULT];
llgtr %r3,%r3 # unsigned *
llgtr %r4,%r4 # struct getcpu_cache *
jg sys_getcpu
+
+ .globl compat_sys_epoll_pwait_wrapper
+compat_sys_epoll_pwait_wrapper:
+ lgfr %r2,%r2 # int
+ llgtr %r3,%r3 # struct compat_epoll_event *
+ lgfr %r4,%r4 # int
+ lgfr %r5,%r5 # int
+ llgtr %r6,%r6 # compat_sigset_t *
+ llgf %r0,164(%r15) # compat_size_t
+ stg %r0,160(%r15)
+ jg compat_sys_epoll_pwait
+
+ .globl compat_sys_utimes_wrapper
+compat_sys_utimes_wrapper:
+ llgtr %r2,%r2 # char *
+ llgtr %r3,%r3 # struct compat_timeval *
+ jg compat_sys_utimes
rc->level = level;
rc->buf_size = buf_size;
rc->entry_size = sizeof(debug_entry_t) + buf_size;
- strlcpy(rc->name, name, sizeof(rc->name)-1);
+ strlcpy(rc->name, name, sizeof(rc->name));
memset(rc->views, 0, DEBUG_MAX_VIEWS * sizeof(struct debug_view *));
memset(rc->debugfs_entries, 0 ,DEBUG_MAX_VIEWS *
sizeof(struct dentry*));
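Aside (not part of the patch): the strlcpy() change above works because strlcpy() already reserves room for the terminating NUL within the size it is given, so passing sizeof(dest) is correct and sizeof(dest)-1 needlessly drops one usable character. A minimal userspace sketch with a local stand-in for the kernel helper, purely illustrative:

#include <stdio.h>
#include <string.h>

/* Simplified stand-in for the kernel's strlcpy(): copies at most
 * size - 1 bytes and always NUL-terminates when size > 0. */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = (len >= size) ? size - 1 : len;

		memcpy(dst, src, n);
		dst[n] = '\0';
	}
	return len;
}

int main(void)
{
	char name[8];	/* hypothetical fixed-size field, like rc->name */

	my_strlcpy(name, "abcdefg", sizeof(name));	/* keeps all 7 chars + NUL */
	printf("%s\n", name);

	my_strlcpy(name, "abcdefg", sizeof(name) - 1);	/* needlessly truncates to "abcdef" */
	printf("%s\n", name);
	return 0;
}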
machine_flags |= 4;
}
+#ifdef CONFIG_64BIT
static noinline __init int memory_fast_detect(void)
{
-
unsigned long val0 = 0;
unsigned long val1 = 0xc;
int ret = -ENOSYS;
if (ret || val0 != val1)
return -ENOSYS;
- memory_chunk[0].size = val0;
+ memory_chunk[0].size = val0 + 1;
return 0;
}
+#else
+static inline int memory_fast_detect(void)
+{
+ return -ENOSYS;
+}
+#endif
#define ADDR2G (1UL << 31)
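Aside (not part of the patch): the hunk above uses a common kernel idiom, compiling the real implementation only when the relevant config option is set and supplying an empty static inline stub otherwise, so callers need no #ifdefs of their own. A generic, hypothetical sketch of the pattern, written as a standalone userspace program:

#include <stdio.h>
#include <errno.h>

/* Hypothetical illustration of config-gated code with an inline fallback. */
#ifdef CONFIG_FAST_DETECT
static int fast_detect(void)
{
	return 0;		/* pretend the fast probe succeeded */
}
#else
static inline int fast_detect(void)
{
	return -ENOSYS;		/* not built in; caller uses the slow path */
}
#endif

int main(void)
{
	if (fast_detect() == -ENOSYS)
		printf("falling back to the generic path\n");
	else
		printf("fast path available\n");
	return 0;
}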
}
reipl_block_ccw->hdr.len = IPL_PARM_BLK_CCW_LEN;
reipl_block_ccw->hdr.version = IPL_PARM_BLOCK_VERSION;
- reipl_block_ccw->hdr.blk0_len = sizeof(reipl_block_ccw->ipl_info.ccw);
+ reipl_block_ccw->hdr.blk0_len = IPL_PARM_BLK0_CCW_LEN;
reipl_block_ccw->hdr.pbt = DIAG308_IPL_TYPE_CCW;
/* check if read scp info worked and set loadparm */
if (SCCB_VALID)
} else {
reipl_block_fcp->hdr.len = IPL_PARM_BLK_FCP_LEN;
reipl_block_fcp->hdr.version = IPL_PARM_BLOCK_VERSION;
- reipl_block_fcp->hdr.blk0_len =
- sizeof(reipl_block_fcp->ipl_info.fcp);
+ reipl_block_fcp->hdr.blk0_len = IPL_PARM_BLK0_FCP_LEN;
reipl_block_fcp->hdr.pbt = DIAG308_IPL_TYPE_FCP;
reipl_block_fcp->ipl_info.fcp.opt = DIAG308_IPL_OPT_IPL;
}
}
dump_block_ccw->hdr.len = IPL_PARM_BLK_CCW_LEN;
dump_block_ccw->hdr.version = IPL_PARM_BLOCK_VERSION;
- dump_block_ccw->hdr.blk0_len = sizeof(reipl_block_ccw->ipl_info.ccw);
+ dump_block_ccw->hdr.blk0_len = IPL_PARM_BLK0_CCW_LEN;
dump_block_ccw->hdr.pbt = DIAG308_IPL_TYPE_CCW;
dump_capabilities |= IPL_TYPE_CCW;
return 0;
}
dump_block_fcp->hdr.len = IPL_PARM_BLK_FCP_LEN;
dump_block_fcp->hdr.version = IPL_PARM_BLOCK_VERSION;
- dump_block_fcp->hdr.blk0_len = sizeof(dump_block_fcp->ipl_info.fcp);
+ dump_block_fcp->hdr.blk0_len = IPL_PARM_BLK0_FCP_LEN;
dump_block_fcp->hdr.pbt = DIAG308_IPL_TYPE_FCP;
dump_block_fcp->ipl_info.fcp.opt = DIAG308_IPL_OPT_DUMP;
dump_capabilities |= IPL_TYPE_FCP;
* shall not cross any page boundaries (vmalloc area!) when writing
* the new instruction.
*/
- addr = (u32 *)ALIGN((unsigned long)args->ptr, 4);
+ addr = (u32 *)((unsigned long)args->ptr & -4UL);
if ((unsigned long)args->ptr & 2)
instr = ((*addr) & 0xffff0000) | args->new;
else
SYSCALL(sys_vmsplice,sys_vmsplice,compat_sys_vmsplice_wrapper)
NI_SYSCALL /* 310 sys_move_pages */
SYSCALL(sys_getcpu,sys_getcpu,sys_getcpu_wrapper)
-SYSCALL(sys_epoll_pwait,sys_epoll_pwait,sys_ni_syscall)
+SYSCALL(sys_epoll_pwait,sys_epoll_pwait,compat_sys_epoll_pwait_wrapper)
+SYSCALL(sys_utimes,sys_utimes,compat_sys_utimes_wrapper)
continue;
}
+ if (bar_value < *lower_limit || (bar_value + bar_size) >= *upper_limit) {
+ DBG(" unavailable -- skipping, value %x size %x\n",
+ bar_value, bar_size);
+ continue;
+ }
+
#ifdef CONFIG_PCI_AUTO_UPDATE_RESOURCES
/* Write it out and update our limit */
early_write_config_dword(hose, top_bus, current_bus, pci_devfn,
*
* CPU init code
*
- * Copyright (C) 2002 - 2006 Paul Mundt
+ * Copyright (C) 2002 - 2007 Paul Mundt
* Copyright (C) 2003 Richard Curnow
*
* This file is subject to the terms and conditions of the GNU General Public
{
unsigned long ccr, flags;
- if (current_cpu_data.type == CPU_SH_NONE)
- panic("Unknown CPU");
+ /* First setup the rest of the I-cache info */
+ current_cpu_data.icache.entry_mask = current_cpu_data.icache.way_incr -
+ current_cpu_data.icache.linesz;
+
+ current_cpu_data.icache.way_size = current_cpu_data.icache.sets *
+ current_cpu_data.icache.linesz;
+
+ /* And the D-cache too */
+ current_cpu_data.dcache.entry_mask = current_cpu_data.dcache.way_incr -
+ current_cpu_data.dcache.linesz;
+
+ current_cpu_data.dcache.way_size = current_cpu_data.dcache.sets *
+ current_cpu_data.dcache.linesz;
jump_to_P2();
ccr = ctrl_inl(CCR);
/* First, probe the CPU */
detect_cpu_and_cache_system();
+ if (current_cpu_data.type == CPU_SH_NONE)
+ panic("Unknown CPU");
+
/* Init the cache */
cache_init();
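Aside (not part of the patch): the derived cache fields in the hunks above follow directly from the formulas shown (entry_mask = way_incr - linesz, way_size = sets * linesz). A worked example with hypothetical numbers, not taken from any particular SH part:

#include <stdio.h>

int main(void)
{
	/* Hypothetical geometry, chosen only to illustrate the arithmetic. */
	unsigned int way_incr = 0x2000;		/* 8 KiB stride between ways */
	unsigned int linesz   = 32;		/* 32-byte cache lines */
	unsigned int sets     = 256;

	unsigned int entry_mask = way_incr - linesz;	/* 0x1fe0 */
	unsigned int way_size   = sets * linesz;	/* 8192 bytes per way */

	printf("entry_mask=0x%x way_size=%u\n", entry_mask, way_size);
	return 0;
}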
interrupt_entry:
mov r9,r4
+ mov r15,r5
mov.l 6f,r9
mov.l 7f,r8
jmp @r8
interrupt_exception:
mov.l 1f, r9
+ mov.l 2f, r4
+ mov.l @r4, r4
jmp @r9
- nop
+ mov r15, r5
rts
nop
.align 2
1: .long do_IRQ
+2: .long INTEVT
.align 2
ENTRY(exception_none)
}
- /* Setup the rest of the I-cache info */
- current_cpu_data.icache.entry_mask = current_cpu_data.icache.way_incr -
- current_cpu_data.icache.linesz;
-
- current_cpu_data.icache.way_size = current_cpu_data.icache.sets *
- current_cpu_data.icache.linesz;
-
/* And the rest of the D-cache */
if (current_cpu_data.dcache.ways > 1) {
size = sizes[(cvr >> 16) & 0xf];
current_cpu_data.dcache.sets = (size >> 6);
}
- current_cpu_data.dcache.entry_mask = current_cpu_data.dcache.way_incr -
- current_cpu_data.dcache.linesz;
-
- current_cpu_data.dcache.way_size = current_cpu_data.dcache.sets *
- current_cpu_data.dcache.linesz;
-
/*
* Setup the L2 cache desc
*
#include <linux/module.h>
#include <linux/kernel_stat.h>
#include <linux/seq_file.h>
-#include <linux/io.h>
#include <linux/irq.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
static union irq_ctx *softirq_ctx[NR_CPUS] __read_mostly;
#endif
-asmlinkage int do_IRQ(unsigned long r4, unsigned long r5,
- unsigned long r6, unsigned long r7,
- struct pt_regs __regs)
+asmlinkage int do_IRQ(unsigned int irq, struct pt_regs *regs)
{
- struct pt_regs *regs = RELOC_HIDE(&__regs, 0);
struct pt_regs *old_regs = set_irq_regs(regs);
- int irq;
#ifdef CONFIG_4KSTACKS
union irq_ctx *curctx, *irqctx;
#endif
}
#endif
-#ifdef CONFIG_CPU_HAS_INTEVT
- irq = evt2irq(ctrl_inl(INTEVT));
-#else
- irq = r4;
-#endif
-
- irq = irq_demux(irq);
+ irq = irq_demux(evt2irq(irq));
#ifdef CONFIG_4KSTACKS
curctx = (union irq_ctx *)current_thread_info();
DECLARE_EXPORT(__movmem_i4_even);
DECLARE_EXPORT(__movmem_i4_odd);
DECLARE_EXPORT(__movmemSI12_i4);
-DECLARE_EXPORT(__sdivsi3_i4i);
-DECLARE_EXPORT(__udiv_qrnnd_16);
-DECLARE_EXPORT(__udivsi3_i4i);
#else /* GCC 3.x */
DECLARE_EXPORT(__movstr_i4_even);
DECLARE_EXPORT(__movstr_i4_odd);
/*
* Normally called from {do_}pci_scan_bus...
*/
-void __init pcibios_fixup_bus(struct pci_bus *bus)
+void __devinit pcibios_fixup_bus(struct pci_bus *bus)
{
struct pci_dev *dev;
int i, has_io, has_mem;
/*
* Other archs parse arguments here.
*/
-char * __init pcibios_setup(char *str)
+char * __devinit pcibios_setup(char *str)
{
return str;
}
#ifndef CONFIG_SMP
if(last_task_used_math == current) {
#else
- if(current_thread_info()->flags & _TIF_USEDFPU) {
+ if (test_thread_flag(TIF_USEDFPU)) {
#endif
/* Keep process from leaving FPU in a bogon state. */
put_psr(get_psr() | PSR_EF);
#ifndef CONFIG_SMP
last_task_used_math = NULL;
#else
- current_thread_info()->flags &= ~_TIF_USEDFPU;
+ clear_thread_flag(TIF_USEDFPU);
#endif
}
}
#ifndef CONFIG_SMP
if(last_task_used_math == current) {
#else
- if(current_thread_info()->flags & _TIF_USEDFPU) {
+ if (test_thread_flag(TIF_USEDFPU)) {
#endif
/* Clean the fpu. */
put_psr(get_psr() | PSR_EF);
#ifndef CONFIG_SMP
last_task_used_math = NULL;
#else
- current_thread_info()->flags &= ~_TIF_USEDFPU;
+ clear_thread_flag(TIF_USEDFPU);
#endif
}
#ifndef CONFIG_SMP
if(last_task_used_math == current) {
#else
- if(current_thread_info()->flags & _TIF_USEDFPU) {
+ if (test_thread_flag(TIF_USEDFPU)) {
#endif
put_psr(get_psr() | PSR_EF);
fpsave(&p->thread.float_regs[0], &p->thread.fsr,
&p->thread.fpqueue[0], &p->thread.fpqdepth);
#ifdef CONFIG_SMP
- current_thread_info()->flags &= ~_TIF_USEDFPU;
+ clear_thread_flag(TIF_USEDFPU);
#endif
}
return 1;
}
#ifdef CONFIG_SMP
- if (current_thread_info()->flags & _TIF_USEDFPU) {
+ if (test_thread_flag(TIF_USEDFPU)) {
put_psr(get_psr() | PSR_EF);
fpsave(&current->thread.float_regs[0], &current->thread.fsr,
&current->thread.fpqueue[0], &current->thread.fpqdepth);
if (regs != NULL) {
regs->psr &= ~(PSR_EF);
- current_thread_info()->flags &= ~(_TIF_USEDFPU);
+ clear_thread_flag(TIF_USEDFPU);
}
}
#else
ret = ARG_MAX;
break;
case _SC_CHILD_MAX:
- ret = -1; /* no limit */
+ ret = current->signal->rlim[RLIMIT_NPROC].rlim_cur;
break;
case _SC_CLK_TCK:
ret = HZ;
ret = NGROUPS_MAX;
break;
case _SC_OPEN_MAX:
- ret = OPEN_MAX;
+ ret = current->signal->rlim[RLIMIT_NOFILE].rlim_cur;
break;
case _SC_JOB_CONTROL:
ret = 1; /* yes, we do support job control */
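Aside (not part of the patch): after this change the emulated sysconf values track the calling process's resource limits instead of fixed constants, which matches what a native sysconf(_SC_OPEN_MAX) generally reports (the RLIMIT_NOFILE soft limit). A small userspace check, illustrative only:

#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;
	long open_max = sysconf(_SC_OPEN_MAX);

	if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
		printf("sysconf(_SC_OPEN_MAX) = %ld, RLIMIT_NOFILE soft = %ld\n",
		       open_max, (long)rl.rlim_cur);
	return 0;
}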
/*285*/ .long sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_fstatat64
/*290*/ .long sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
/*295*/ .long sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare
-/*300*/ .long sys_set_robust_list, sys_get_robust_list, sys_migrate_pages
+/*300*/ .long sys_set_robust_list, sys_get_robust_list, sys_migrate_pages, sys_mbind, sys_get_mempolicy
+/*305*/ .long sys_set_mempolicy, sys_kexec_load, sys_move_pages, sys_getcpu, sys_epoll_pwait
#ifdef CONFIG_SUNOS_EMUL
/* Now the SunOS syscall table. */
.long sunos_nosys, sunos_nosys, sunos_nosys
.long sunos_nosys
/*300*/ .long sunos_nosys, sunos_nosys, sunos_nosys
+ .long sunos_nosys, sunos_nosys, sunos_nosys
+ .long sunos_nosys, sunos_nosys, sunos_nosys
+ .long sunos_nosys
#endif
} else {
fpload(&current->thread.float_regs[0], &current->thread.fsr);
}
- current_thread_info()->flags |= _TIF_USEDFPU;
+ set_thread_flag(TIF_USEDFPU);
#endif
}
#ifndef CONFIG_SMP
if(!fpt) {
#else
- if(!(task_thread_info(fpt)->flags & _TIF_USEDFPU)) {
+ if (!test_tsk_thread_flag(fpt, TIF_USEDFPU)) {
#endif
fpsave(&fake_regs[0], &fake_fsr, &fake_queue[0], &fake_depth);
regs->psr &= ~PSR_EF;
/* nope, better SIGFPE the offending process... */
#ifdef CONFIG_SMP
- task_thread_info(fpt)->flags &= ~_TIF_USEDFPU;
+ clear_tsk_thread_flag(fpt, TIF_USEDFPU);
#endif
if(psr & PSR_PS) {
/* The first fsr store/load we tried trapped,
spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
return ret;
}
+EXPORT_SYMBOL(atomic_cmpxchg);
int atomic_add_unless(atomic_t *v, int a, int u)
{
spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
return ret != u;
}
+EXPORT_SYMBOL(atomic_add_unless);
/* Atomic operations are already serializing */
void atomic_set(atomic_t *v, int i)
printk("Free swap: %6ldkB\n",
nr_swap_pages << (PAGE_SHIFT-10));
printk("%ld pages of RAM\n", totalram_pages);
- printk("%d free pages\n", nr_free_pages());
+ printk("%ld free pages\n", nr_free_pages());
#if 0 /* undefined pgtable_cache_size, pgd_cache_size */
printk("%ld pages in page table cache\n",pgtable_cache_size);
#ifndef CONFIG_SMP
If you don't know what to do here, say N.
-config PREEMPT
- bool "Preemptible Kernel"
- help
- This option reduces the latency of the kernel when reacting to
- real-time or interactive events by allowing a low priority process to
- be preempted even if it is in kernel mode executing a system call.
- This allows applications to run more reliably even when the system is
- under load.
-
- Say Y here if you are building a kernel for a desktop, embedded
- or real-time system. Say N if you are unsure.
-
config NR_CPUS
int "Maximum number of CPUs (2-64)"
range 2 64
when dealing with UltraSPARC cpus at a cost of slightly increased
overhead in some places. If unsure say N here.
+source "kernel/Kconfig.preempt"
+
config CMDLINE_BOOL
bool "Default bootloader kernel arguments"
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.21-rc2
-# Wed Feb 28 09:50:51 2007
+# Linux kernel version: 2.6.21-rc4
+# Sat Mar 17 14:18:44 2007
#
CONFIG_SPARC=y
CONFIG_SPARC64=y
# CONFIG_IKCONFIG is not set
CONFIG_SYSFS_DEPRECATED=y
CONFIG_RELAY=y
+# CONFIG_BLK_DEV_INITRD is not set
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
# CONFIG_EMBEDDED is not set
# General machine setup
#
# CONFIG_SMP is not set
-# CONFIG_PREEMPT is not set
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_TABLE=m
# CONFIG_CPU_FREQ_DEBUG is not set
CONFIG_BINFMT_ELF=y
CONFIG_BINFMT_MISC=m
CONFIG_SOLARIS_EMUL=y
+# CONFIG_PREEMPT_NONE is not set
+CONFIG_PREEMPT_VOLUNTARY=y
+# CONFIG_PREEMPT is not set
# CONFIG_CMDLINE_BOOL is not set
#
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_UB is not set
# CONFIG_BLK_DEV_RAM is not set
-# CONFIG_BLK_DEV_INITRD is not set
CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8
CONFIG_CDROM_PKTCDVD_WCACHE=y
brgez,pn %g4, kvmap_dtlb_nonlinear
nop
+#ifdef CONFIG_DEBUG_PAGEALLOC
+ /* Index through the base page size TSB even for linear
+ * mappings when using page allocation debugging.
+ */
+ KERN_TSB_LOOKUP_TL1(%g4, %g6, %g5, %g1, %g2, %g3, kvmap_dtlb_load)
+#else
/* Correct TAG_TARGET is already in %g6, check 4mb TSB. */
KERN_TSB4M_LOOKUP_TL1(%g6, %g5, %g1, %g2, %g3, kvmap_dtlb_load)
-
+#endif
/* TSB entry address left in %g1, lookup linear PTE.
* Must preserve %g1 and %g6 (TAG).
*/
subsys_initcall(pcibios_init);
-void pcibios_fixup_bus(struct pci_bus *pbus)
+void __devinit pcibios_fixup_bus(struct pci_bus *pbus)
{
struct pci_pbm_info *pbm = pbus->sysdata;
}
EXPORT_SYMBOL(pcibios_bus_to_resource);
-char * __init pcibios_setup(char *str)
+char * __devinit pcibios_setup(char *str)
{
return str;
}
#define IOPTE_IS_DUMMY(iommu, iopte) \
((iopte_val(*iopte) & IOPTE_PAGE) == (iommu)->dummy_page_pa)
-static void inline iopte_make_dummy(struct pci_iommu *iommu, iopte_t *iopte)
+static inline void iopte_make_dummy(struct pci_iommu *iommu, iopte_t *iopte)
{
unsigned long val = iopte_val(*iopte);
struct thread_info *t = current_thread_info();
struct mm_struct *mm;
- if (t->flags & _TIF_ABI_PENDING)
- t->flags ^= (_TIF_ABI_PENDING | _TIF_32BIT);
+ if (test_ti_thread_flag(t, TIF_ABI_PENDING)) {
+ clear_ti_thread_flag(t, TIF_ABI_PENDING);
+ if (test_ti_thread_flag(t, TIF_32BIT))
+ clear_ti_thread_flag(t, TIF_32BIT);
+ else
+ set_ti_thread_flag(t, TIF_32BIT);
+ }
mm = t->task->mm;
if (mm)
#include "iommu_common.h"
-/* These should be allocated on an SMP_CACHE_BYTES
- * aligned boundary for optimal performance.
- *
- * On SYSIO, using an 8K page size we have 1GB of SBUS
- * DMA space mapped. We divide this space into equally
- * sized clusters. We allocate a DMA mapping from the
- * cluster that matches the order of the allocation, or
- * if the order is greater than the number of clusters,
- * we try to allocate from the last cluster.
- */
-
-#define NCLUSTERS 8UL
-#define ONE_GIG (1UL * 1024UL * 1024UL * 1024UL)
-#define CLUSTER_SIZE (ONE_GIG / NCLUSTERS)
-#define CLUSTER_MASK (CLUSTER_SIZE - 1)
-#define CLUSTER_NPAGES (CLUSTER_SIZE >> IO_PAGE_SHIFT)
#define MAP_BASE ((u32)0xc0000000)
+struct sbus_iommu_arena {
+ unsigned long *map;
+ unsigned int hint;
+ unsigned int limit;
+};
+
struct sbus_iommu {
-/*0x00*/spinlock_t lock;
+ spinlock_t lock;
-/*0x08*/iopte_t *page_table;
-/*0x10*/unsigned long strbuf_regs;
-/*0x18*/unsigned long iommu_regs;
-/*0x20*/unsigned long sbus_control_reg;
+ struct sbus_iommu_arena arena;
-/*0x28*/volatile unsigned long strbuf_flushflag;
+ iopte_t *page_table;
+ unsigned long strbuf_regs;
+ unsigned long iommu_regs;
+ unsigned long sbus_control_reg;
- /* If NCLUSTERS is ever decresed to 4 or lower,
- * you must increase the size of the type of
- * these counters. You have been duly warned. -DaveM
- */
-/*0x30*/struct {
- u16 next;
- u16 flush;
- } alloc_info[NCLUSTERS];
-
- /* The lowest used consistent mapping entry. Since
- * we allocate consistent maps out of cluster 0 this
- * is relative to the beginning of closter 0.
- */
-/*0x50*/u32 lowest_consistent_map;
+ volatile unsigned long strbuf_flushflag;
};
/* Offsets from iommu_regs */
tag += 8UL;
}
upa_readq(iommu->sbus_control_reg);
-
- for (entry = 0; entry < NCLUSTERS; entry++) {
- iommu->alloc_info[entry].flush =
- iommu->alloc_info[entry].next;
- }
-}
-
-static void iommu_flush(struct sbus_iommu *iommu, u32 base, unsigned long npages)
-{
- while (npages--)
- upa_writeq(base + (npages << IO_PAGE_SHIFT),
- iommu->iommu_regs + IOMMU_FLUSH);
- upa_readq(iommu->sbus_control_reg);
}
/* Offsets from strbuf_regs */
base, npages);
}
-static iopte_t *alloc_streaming_cluster(struct sbus_iommu *iommu, unsigned long npages)
+/* Based largely upon the ppc64 iommu allocator. */
+static long sbus_arena_alloc(struct sbus_iommu *iommu, unsigned long npages)
{
- iopte_t *iopte, *limit, *first, *cluster;
- unsigned long cnum, ent, nent, flush_point, found;
-
- cnum = 0;
- nent = 1;
- while ((1UL << cnum) < npages)
- cnum++;
- if(cnum >= NCLUSTERS) {
- nent = 1UL << (cnum - NCLUSTERS);
- cnum = NCLUSTERS - 1;
- }
- iopte = iommu->page_table + (cnum * CLUSTER_NPAGES);
-
- if (cnum == 0)
- limit = (iommu->page_table +
- iommu->lowest_consistent_map);
- else
- limit = (iopte + CLUSTER_NPAGES);
-
- iopte += ((ent = iommu->alloc_info[cnum].next) << cnum);
- flush_point = iommu->alloc_info[cnum].flush;
-
- first = iopte;
- cluster = NULL;
- found = 0;
- for (;;) {
- if (iopte_val(*iopte) == 0UL) {
- found++;
- if (!cluster)
- cluster = iopte;
+ struct sbus_iommu_arena *arena = &iommu->arena;
+ unsigned long n, i, start, end, limit;
+ int pass;
+
+ limit = arena->limit;
+ start = arena->hint;
+ pass = 0;
+
+again:
+ n = find_next_zero_bit(arena->map, limit, start);
+ end = n + npages;
+ if (unlikely(end >= limit)) {
+ if (likely(pass < 1)) {
+ limit = start;
+ start = 0;
+ __iommu_flushall(iommu);
+ pass++;
+ goto again;
} else {
- /* Used cluster in the way */
- cluster = NULL;
- found = 0;
+ /* Scanned the whole thing, give up. */
+ return -1;
}
+ }
- if (found == nent)
- break;
-
- iopte += (1 << cnum);
- ent++;
- if (iopte >= limit) {
- iopte = (iommu->page_table + (cnum * CLUSTER_NPAGES));
- ent = 0;
-
- /* Multiple cluster allocations must not wrap */
- cluster = NULL;
- found = 0;
+ for (i = n; i < end; i++) {
+ if (test_bit(i, arena->map)) {
+ start = i + 1;
+ goto again;
}
- if (ent == flush_point)
- __iommu_flushall(iommu);
- if (iopte == first)
- goto bad;
}
- /* ent/iopte points to the last cluster entry we're going to use,
- * so save our place for the next allocation.
- */
- if ((iopte + (1 << cnum)) >= limit)
- ent = 0;
- else
- ent = ent + 1;
- iommu->alloc_info[cnum].next = ent;
- if (ent == flush_point)
- __iommu_flushall(iommu);
-
- /* I've got your streaming cluster right here buddy boy... */
- return cluster;
-
-bad:
- printk(KERN_EMERG "sbus: alloc_streaming_cluster of npages(%ld) failed!\n",
- npages);
- return NULL;
+ for (i = n; i < end; i++)
+ __set_bit(i, arena->map);
+
+ arena->hint = end;
+
+ return n;
}
-static void free_streaming_cluster(struct sbus_iommu *iommu, u32 base, unsigned long npages)
+static void sbus_arena_free(struct sbus_iommu_arena *arena, unsigned long base, unsigned long npages)
{
- unsigned long cnum, ent, nent;
- iopte_t *iopte;
+ unsigned long i;
- cnum = 0;
- nent = 1;
- while ((1UL << cnum) < npages)
- cnum++;
- if(cnum >= NCLUSTERS) {
- nent = 1UL << (cnum - NCLUSTERS);
- cnum = NCLUSTERS - 1;
- }
- ent = (base & CLUSTER_MASK) >> (IO_PAGE_SHIFT + cnum);
- iopte = iommu->page_table + ((base - MAP_BASE) >> IO_PAGE_SHIFT);
- do {
- iopte_val(*iopte) = 0UL;
- iopte += 1 << cnum;
- } while(--nent);
-
- /* If the global flush might not have caught this entry,
- * adjust the flush point such that we will flush before
- * ever trying to reuse it.
- */
-#define between(X,Y,Z) (((Z) - (Y)) >= ((X) - (Y)))
- if (between(ent, iommu->alloc_info[cnum].next, iommu->alloc_info[cnum].flush))
- iommu->alloc_info[cnum].flush = ent;
-#undef between
+ for (i = base; i < (base + npages); i++)
+ __clear_bit(i, arena->map);
}
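Aside (not part of the patch): the arena allocator introduced above is essentially a hinted first-fit search over a bitmap of IOMMU page slots; sbus_arena_alloc() scans forward from the hint, retries once from zero after flushing, and marks the winning range, while sbus_arena_free() simply clears the bits again. A stripped-down userspace sketch of the same idea, with hypothetical names and no locking or flushing:

#include <stdio.h>
#include <string.h>

#define SLOTS 64			/* hypothetical number of IOMMU page slots */

static unsigned char map[SLOTS];	/* one byte per slot, 1 = in use */
static unsigned long hint;		/* where the next search starts */

/* Hinted first-fit search with a single wrap-around retry. */
static long arena_alloc(unsigned long npages)
{
	unsigned long start = hint, limit = SLOTS;
	int pass = 0;

again:
	for (unsigned long n = start; n + npages <= limit; n++) {
		unsigned long i;

		for (i = 0; i < npages; i++)
			if (map[n + i])
				break;
		if (i == npages) {		/* found a free run */
			memset(&map[n], 1, npages);
			hint = n + npages;
			return (long)n;
		}
	}
	if (pass++ == 0) {			/* wrap once, as the kernel code does */
		limit = start;
		start = 0;
		goto again;
	}
	return -1;				/* arena exhausted */
}

static void arena_free(unsigned long base, unsigned long npages)
{
	memset(&map[base], 0, npages);
}

int main(void)
{
	long a = arena_alloc(4);
	long b = arena_alloc(8);

	printf("a=%ld b=%ld\n", a, b);		/* a=0, b=4 */
	arena_free((unsigned long)a, 4);
	printf("c=%ld\n", arena_alloc(2));	/* c=12: search resumes at the hint */
	return 0;
}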
-/* We allocate consistent mappings from the end of cluster zero. */
-static iopte_t *alloc_consistent_cluster(struct sbus_iommu *iommu, unsigned long npages)
+static void sbus_iommu_table_init(struct sbus_iommu *iommu, unsigned int tsbsize)
{
- iopte_t *iopte;
+ unsigned long tsbbase, order, sz, num_tsb_entries;
- iopte = iommu->page_table + (1 * CLUSTER_NPAGES);
- while (iopte > iommu->page_table) {
- iopte--;
- if (!(iopte_val(*iopte) & IOPTE_VALID)) {
- unsigned long tmp = npages;
+ num_tsb_entries = tsbsize / sizeof(iopte_t);
- while (--tmp) {
- iopte--;
- if (iopte_val(*iopte) & IOPTE_VALID)
- break;
- }
- if (tmp == 0) {
- u32 entry = (iopte - iommu->page_table);
+ /* Setup initial software IOMMU state. */
+ spin_lock_init(&iommu->lock);
- if (entry < iommu->lowest_consistent_map)
- iommu->lowest_consistent_map = entry;
- return iopte;
- }
- }
+ /* Allocate and initialize the free area map. */
+ sz = num_tsb_entries / 8;
+ sz = (sz + 7UL) & ~7UL;
+ iommu->arena.map = kzalloc(sz, GFP_KERNEL);
+ if (!iommu->arena.map) {
+ prom_printf("PCI_IOMMU: Error, kmalloc(arena.map) failed.\n");
+ prom_halt();
+ }
+ iommu->arena.limit = num_tsb_entries;
+
+ /* Now allocate and setup the IOMMU page table itself. */
+ order = get_order(tsbsize);
+ tsbbase = __get_free_pages(GFP_KERNEL, order);
+ if (!tsbbase) {
+ prom_printf("IOMMU: Error, gfp(tsb) failed.\n");
+ prom_halt();
}
- return NULL;
+ iommu->page_table = (iopte_t *)tsbbase;
+ memset(iommu->page_table, 0, tsbsize);
}
-static void free_consistent_cluster(struct sbus_iommu *iommu, u32 base, unsigned long npages)
+static inline iopte_t *alloc_npages(struct sbus_iommu *iommu, unsigned long npages)
{
- iopte_t *iopte = iommu->page_table + ((base - MAP_BASE) >> IO_PAGE_SHIFT);
+ long entry;
- if ((iopte - iommu->page_table) == iommu->lowest_consistent_map) {
- iopte_t *walk = iopte + npages;
- iopte_t *limit;
+ entry = sbus_arena_alloc(iommu, npages);
+ if (unlikely(entry < 0))
+ return NULL;
- limit = iommu->page_table + CLUSTER_NPAGES;
- while (walk < limit) {
- if (iopte_val(*walk) != 0UL)
- break;
- walk++;
- }
- iommu->lowest_consistent_map =
- (walk - iommu->page_table);
- }
+ return iommu->page_table + entry;
+}
- while (npages--)
- *iopte++ = __iopte(0UL);
+static inline void free_npages(struct sbus_iommu *iommu, dma_addr_t base, unsigned long npages)
+{
+ sbus_arena_free(&iommu->arena, base >> IO_PAGE_SHIFT, npages);
}
void *sbus_alloc_consistent(struct sbus_dev *sdev, size_t size, dma_addr_t *dvma_addr)
{
- unsigned long order, first_page, flags;
struct sbus_iommu *iommu;
iopte_t *iopte;
+ unsigned long flags, order, first_page;
void *ret;
int npages;
- if (size <= 0 || sdev == NULL || dvma_addr == NULL)
- return NULL;
-
size = IO_PAGE_ALIGN(size);
order = get_order(size);
if (order >= 10)
return NULL;
+
first_page = __get_free_pages(GFP_KERNEL|__GFP_COMP, order);
if (first_page == 0UL)
return NULL;
iommu = sdev->bus->iommu;
spin_lock_irqsave(&iommu->lock, flags);
- iopte = alloc_consistent_cluster(iommu, size >> IO_PAGE_SHIFT);
- if (iopte == NULL) {
- spin_unlock_irqrestore(&iommu->lock, flags);
+ iopte = alloc_npages(iommu, size >> IO_PAGE_SHIFT);
+ spin_unlock_irqrestore(&iommu->lock, flags);
+
+ if (unlikely(iopte == NULL)) {
free_pages(first_page, order);
return NULL;
}
- /* Ok, we're committed at this point. */
- *dvma_addr = MAP_BASE + ((iopte - iommu->page_table) << IO_PAGE_SHIFT);
+ *dvma_addr = (MAP_BASE +
+ ((iopte - iommu->page_table) << IO_PAGE_SHIFT));
ret = (void *) first_page;
npages = size >> IO_PAGE_SHIFT;
+ first_page = __pa(first_page);
while (npages--) {
- *iopte++ = __iopte(IOPTE_VALID | IOPTE_CACHE | IOPTE_WRITE |
- (__pa(first_page) & IOPTE_PAGE));
+ iopte_val(*iopte) = (IOPTE_VALID | IOPTE_CACHE |
+ IOPTE_WRITE |
+ (first_page & IOPTE_PAGE));
+ iopte++;
first_page += IO_PAGE_SIZE;
}
- iommu_flush(iommu, *dvma_addr, size >> IO_PAGE_SHIFT);
- spin_unlock_irqrestore(&iommu->lock, flags);
return ret;
}
void sbus_free_consistent(struct sbus_dev *sdev, size_t size, void *cpu, dma_addr_t dvma)
{
- unsigned long order, npages;
struct sbus_iommu *iommu;
-
- if (size <= 0 || sdev == NULL || cpu == NULL)
- return;
+ iopte_t *iopte;
+ unsigned long flags, order, npages;
npages = IO_PAGE_ALIGN(size) >> IO_PAGE_SHIFT;
iommu = sdev->bus->iommu;
+ iopte = iommu->page_table +
+ ((dvma - MAP_BASE) >> IO_PAGE_SHIFT);
+
+ spin_lock_irqsave(&iommu->lock, flags);
+
+ free_npages(iommu, dvma - MAP_BASE, npages);
- spin_lock_irq(&iommu->lock);
- free_consistent_cluster(iommu, dvma, npages);
- iommu_flush(iommu, dvma, npages);
- spin_unlock_irq(&iommu->lock);
+ spin_unlock_irqrestore(&iommu->lock, flags);
order = get_order(size);
if (order < 10)
free_pages((unsigned long)cpu, order);
}
-dma_addr_t sbus_map_single(struct sbus_dev *sdev, void *ptr, size_t size, int dir)
+dma_addr_t sbus_map_single(struct sbus_dev *sdev, void *ptr, size_t sz, int direction)
{
- struct sbus_iommu *iommu = sdev->bus->iommu;
- unsigned long npages, pbase, flags;
- iopte_t *iopte;
- u32 dma_base, offset;
- unsigned long iopte_bits;
+ struct sbus_iommu *iommu;
+ iopte_t *base;
+ unsigned long flags, npages, oaddr;
+ unsigned long i, base_paddr;
+ u32 bus_addr, ret;
+ unsigned long iopte_protection;
+
+ iommu = sdev->bus->iommu;
- if (dir == SBUS_DMA_NONE)
+ if (unlikely(direction == SBUS_DMA_NONE))
BUG();
- pbase = (unsigned long) ptr;
- offset = (u32) (pbase & ~IO_PAGE_MASK);
- size = (IO_PAGE_ALIGN(pbase + size) - (pbase & IO_PAGE_MASK));
- pbase = (unsigned long) __pa(pbase & IO_PAGE_MASK);
+ oaddr = (unsigned long)ptr;
+ npages = IO_PAGE_ALIGN(oaddr + sz) - (oaddr & IO_PAGE_MASK);
+ npages >>= IO_PAGE_SHIFT;
spin_lock_irqsave(&iommu->lock, flags);
- npages = size >> IO_PAGE_SHIFT;
- iopte = alloc_streaming_cluster(iommu, npages);
- if (iopte == NULL)
- goto bad;
- dma_base = MAP_BASE + ((iopte - iommu->page_table) << IO_PAGE_SHIFT);
- npages = size >> IO_PAGE_SHIFT;
- iopte_bits = IOPTE_VALID | IOPTE_STBUF | IOPTE_CACHE;
- if (dir != SBUS_DMA_TODEVICE)
- iopte_bits |= IOPTE_WRITE;
- while (npages--) {
- *iopte++ = __iopte(iopte_bits | (pbase & IOPTE_PAGE));
- pbase += IO_PAGE_SIZE;
- }
- npages = size >> IO_PAGE_SHIFT;
+ base = alloc_npages(iommu, npages);
spin_unlock_irqrestore(&iommu->lock, flags);
- return (dma_base | offset);
+ if (unlikely(!base))
+ BUG();
-bad:
- spin_unlock_irqrestore(&iommu->lock, flags);
- BUG();
- return 0;
+ bus_addr = (MAP_BASE +
+ ((base - iommu->page_table) << IO_PAGE_SHIFT));
+ ret = bus_addr | (oaddr & ~IO_PAGE_MASK);
+ base_paddr = __pa(oaddr & IO_PAGE_MASK);
+
+ iopte_protection = IOPTE_VALID | IOPTE_STBUF | IOPTE_CACHE;
+ if (direction != SBUS_DMA_TODEVICE)
+ iopte_protection |= IOPTE_WRITE;
+
+ for (i = 0; i < npages; i++, base++, base_paddr += IO_PAGE_SIZE)
+ iopte_val(*base) = iopte_protection | base_paddr;
+
+ return ret;
}
-void sbus_unmap_single(struct sbus_dev *sdev, dma_addr_t dma_addr, size_t size, int direction)
+void sbus_unmap_single(struct sbus_dev *sdev, dma_addr_t bus_addr, size_t sz, int direction)
{
struct sbus_iommu *iommu = sdev->bus->iommu;
- u32 dma_base = dma_addr & IO_PAGE_MASK;
- unsigned long flags;
+ iopte_t *base;
+ unsigned long flags, npages, i;
+
+ if (unlikely(direction == SBUS_DMA_NONE))
+ BUG();
+
+ npages = IO_PAGE_ALIGN(bus_addr + sz) - (bus_addr & IO_PAGE_MASK);
+ npages >>= IO_PAGE_SHIFT;
+ base = iommu->page_table +
+ ((bus_addr - MAP_BASE) >> IO_PAGE_SHIFT);
- size = (IO_PAGE_ALIGN(dma_addr + size) - dma_base);
+ bus_addr &= IO_PAGE_MASK;
spin_lock_irqsave(&iommu->lock, flags);
- free_streaming_cluster(iommu, dma_base, size >> IO_PAGE_SHIFT);
- sbus_strbuf_flush(iommu, dma_base, size >> IO_PAGE_SHIFT, direction);
+ sbus_strbuf_flush(iommu, bus_addr, npages, direction);
+ for (i = 0; i < npages; i++)
+ iopte_val(base[i]) = 0UL;
+ free_npages(iommu, bus_addr - MAP_BASE, npages);
spin_unlock_irqrestore(&iommu->lock, flags);
}
#define SG_ENT_PHYS_ADDRESS(SG) \
(__pa(page_address((SG)->page)) + (SG)->offset)
-static inline void fill_sg(iopte_t *iopte, struct scatterlist *sg, int nused, int nelems, unsigned long iopte_bits)
+static inline void fill_sg(iopte_t *iopte, struct scatterlist *sg,
+ int nused, int nelems, unsigned long iopte_protection)
{
struct scatterlist *dma_sg = sg;
struct scatterlist *sg_end = sg + nelems;
for (;;) {
unsigned long tmp;
- tmp = (unsigned long) SG_ENT_PHYS_ADDRESS(sg);
+ tmp = SG_ENT_PHYS_ADDRESS(sg);
len = sg->length;
if (((tmp ^ pteval) >> IO_PAGE_SHIFT) != 0UL) {
pteval = tmp & IO_PAGE_MASK;
sg++;
}
- pteval = ((pteval & IOPTE_PAGE) | iopte_bits);
+ pteval = iopte_protection | (pteval & IOPTE_PAGE);
while (len > 0) {
*iopte++ = __iopte(pteval);
pteval += IO_PAGE_SIZE;
}
}
-int sbus_map_sg(struct sbus_dev *sdev, struct scatterlist *sg, int nents, int dir)
+int sbus_map_sg(struct sbus_dev *sdev, struct scatterlist *sglist, int nelems, int direction)
{
- struct sbus_iommu *iommu = sdev->bus->iommu;
- unsigned long flags, npages;
- iopte_t *iopte;
+ struct sbus_iommu *iommu;
+ unsigned long flags, npages, iopte_protection;
+ iopte_t *base;
u32 dma_base;
struct scatterlist *sgtmp;
int used;
- unsigned long iopte_bits;
-
- if (dir == SBUS_DMA_NONE)
- BUG();
/* Fast path single entry scatterlists. */
- if (nents == 1) {
- sg->dma_address =
+ if (nelems == 1) {
+ sglist->dma_address =
sbus_map_single(sdev,
- (page_address(sg->page) + sg->offset),
- sg->length, dir);
- sg->dma_length = sg->length;
+ (page_address(sglist->page) + sglist->offset),
+ sglist->length, direction);
+ sglist->dma_length = sglist->length;
return 1;
}
- npages = prepare_sg(sg, nents);
+ iommu = sdev->bus->iommu;
+
+ if (unlikely(direction == SBUS_DMA_NONE))
+ BUG();
+
+ npages = prepare_sg(sglist, nelems);
spin_lock_irqsave(&iommu->lock, flags);
- iopte = alloc_streaming_cluster(iommu, npages);
- if (iopte == NULL)
- goto bad;
- dma_base = MAP_BASE + ((iopte - iommu->page_table) << IO_PAGE_SHIFT);
+ base = alloc_npages(iommu, npages);
+ spin_unlock_irqrestore(&iommu->lock, flags);
+
+ if (unlikely(base == NULL))
+ BUG();
+
+ dma_base = MAP_BASE +
+ ((base - iommu->page_table) << IO_PAGE_SHIFT);
/* Normalize DVMA addresses. */
- sgtmp = sg;
- used = nents;
+ used = nelems;
+ sgtmp = sglist;
while (used && sgtmp->dma_length) {
sgtmp->dma_address += dma_base;
sgtmp++;
used--;
}
- used = nents - used;
+ used = nelems - used;
- iopte_bits = IOPTE_VALID | IOPTE_STBUF | IOPTE_CACHE;
- if (dir != SBUS_DMA_TODEVICE)
- iopte_bits |= IOPTE_WRITE;
+ iopte_protection = IOPTE_VALID | IOPTE_STBUF | IOPTE_CACHE;
+ if (direction != SBUS_DMA_TODEVICE)
+ iopte_protection |= IOPTE_WRITE;
+
+ fill_sg(base, sglist, used, nelems, iopte_protection);
- fill_sg(iopte, sg, used, nents, iopte_bits);
#ifdef VERIFY_SG
- verify_sglist(sg, nents, iopte, npages);
+ verify_sglist(sglist, nelems, base, npages);
#endif
- spin_unlock_irqrestore(&iommu->lock, flags);
return used;
-
-bad:
- spin_unlock_irqrestore(&iommu->lock, flags);
- BUG();
- return 0;
}
-void sbus_unmap_sg(struct sbus_dev *sdev, struct scatterlist *sg, int nents, int direction)
+void sbus_unmap_sg(struct sbus_dev *sdev, struct scatterlist *sglist, int nelems, int direction)
{
- unsigned long size, flags;
struct sbus_iommu *iommu;
- u32 dvma_base;
- int i;
+ iopte_t *base;
+ unsigned long flags, i, npages;
+ u32 bus_addr;
- /* Fast path single entry scatterlists. */
- if (nents == 1) {
- sbus_unmap_single(sdev, sg->dma_address, sg->dma_length, direction);
- return;
- }
+ if (unlikely(direction == SBUS_DMA_NONE))
+ BUG();
+
+ iommu = sdev->bus->iommu;
+
+ bus_addr = sglist->dma_address & IO_PAGE_MASK;
- dvma_base = sg[0].dma_address & IO_PAGE_MASK;
- for (i = 0; i < nents; i++) {
- if (sg[i].dma_length == 0)
+ for (i = 1; i < nelems; i++)
+ if (sglist[i].dma_length == 0)
break;
- }
i--;
- size = IO_PAGE_ALIGN(sg[i].dma_address + sg[i].dma_length) - dvma_base;
+ npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) -
+ bus_addr) >> IO_PAGE_SHIFT;
+
+ base = iommu->page_table +
+ ((bus_addr - MAP_BASE) >> IO_PAGE_SHIFT);
- iommu = sdev->bus->iommu;
spin_lock_irqsave(&iommu->lock, flags);
- free_streaming_cluster(iommu, dvma_base, size >> IO_PAGE_SHIFT);
- sbus_strbuf_flush(iommu, dvma_base, size >> IO_PAGE_SHIFT, direction);
+ sbus_strbuf_flush(iommu, bus_addr, npages, direction);
+ for (i = 0; i < npages; i++)
+ iopte_val(base[i]) = 0UL;
+ free_npages(iommu, bus_addr - MAP_BASE, npages);
spin_unlock_irqrestore(&iommu->lock, flags);
}
-void sbus_dma_sync_single_for_cpu(struct sbus_dev *sdev, dma_addr_t base, size_t size, int direction)
+void sbus_dma_sync_single_for_cpu(struct sbus_dev *sdev, dma_addr_t bus_addr, size_t sz, int direction)
{
- struct sbus_iommu *iommu = sdev->bus->iommu;
- unsigned long flags;
+ struct sbus_iommu *iommu;
+ unsigned long flags, npages;
+
+ iommu = sdev->bus->iommu;
- size = (IO_PAGE_ALIGN(base + size) - (base & IO_PAGE_MASK));
+ npages = IO_PAGE_ALIGN(bus_addr + sz) - (bus_addr & IO_PAGE_MASK);
+ npages >>= IO_PAGE_SHIFT;
+ bus_addr &= IO_PAGE_MASK;
spin_lock_irqsave(&iommu->lock, flags);
- sbus_strbuf_flush(iommu, base & IO_PAGE_MASK, size >> IO_PAGE_SHIFT, direction);
+ sbus_strbuf_flush(iommu, bus_addr, npages, direction);
spin_unlock_irqrestore(&iommu->lock, flags);
}
{
}
-void sbus_dma_sync_sg_for_cpu(struct sbus_dev *sdev, struct scatterlist *sg, int nents, int direction)
+void sbus_dma_sync_sg_for_cpu(struct sbus_dev *sdev, struct scatterlist *sglist, int nelems, int direction)
{
- struct sbus_iommu *iommu = sdev->bus->iommu;
- unsigned long flags, size;
- u32 base;
- int i;
+ struct sbus_iommu *iommu;
+ unsigned long flags, npages, i;
+ u32 bus_addr;
+
+ iommu = sdev->bus->iommu;
- base = sg[0].dma_address & IO_PAGE_MASK;
- for (i = 0; i < nents; i++) {
- if (sg[i].dma_length == 0)
+ bus_addr = sglist[0].dma_address & IO_PAGE_MASK;
+ for (i = 0; i < nelems; i++) {
+ if (!sglist[i].dma_length)
break;
}
i--;
- size = IO_PAGE_ALIGN(sg[i].dma_address + sg[i].dma_length) - base;
+ npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length)
+ - bus_addr) >> IO_PAGE_SHIFT;
spin_lock_irqsave(&iommu->lock, flags);
- sbus_strbuf_flush(iommu, base, size >> IO_PAGE_SHIFT, direction);
+ sbus_strbuf_flush(iommu, bus_addr, npages, direction);
spin_unlock_irqrestore(&iommu->lock, flags);
}
struct linux_prom64_registers *pr;
struct device_node *dp;
struct sbus_iommu *iommu;
- unsigned long regs, tsb_base;
+ unsigned long regs;
u64 control;
int i;
memset(iommu, 0, sizeof(*iommu));
- /* We start with no consistent mappings. */
- iommu->lowest_consistent_map = CLUSTER_NPAGES;
-
- for (i = 0; i < NCLUSTERS; i++) {
- iommu->alloc_info[i].flush = 0;
- iommu->alloc_info[i].next = 0;
- }
-
/* Setup spinlock. */
spin_lock_init(&iommu->lock);
sbus->portid, regs);
/* Setup for TSB_SIZE=7, TBW_SIZE=0, MMU_DE=1, MMU_EN=1 */
+ sbus_iommu_table_init(iommu, IO_TSB_SIZE);
+
control = upa_readq(iommu->iommu_regs + IOMMU_CONTROL);
control = ((7UL << 16UL) |
(0UL << 2UL) |
(1UL << 1UL) |
(1UL << 0UL));
-
- /* Using the above configuration we need 1MB iommu page
- * table (128K ioptes * 8 bytes per iopte). This is
- * page order 7 on UltraSparc.
- */
- tsb_base = __get_free_pages(GFP_ATOMIC, get_order(IO_TSB_SIZE));
- if (tsb_base == 0UL) {
- prom_printf("sbus_iommu_init: Fatal error, cannot alloc TSB table.\n");
- prom_halt();
- }
-
- iommu->page_table = (iopte_t *) tsb_base;
- memset(iommu->page_table, 0, IO_TSB_SIZE);
-
upa_writeq(control, iommu->iommu_regs + IOMMU_CONTROL);
/* Clean out any cruft in the IOMMU using
upa_readq(iommu->sbus_control_reg);
/* Give the TSB to SYSIO. */
- upa_writeq(__pa(tsb_base), iommu->iommu_regs + IOMMU_TSBBASE);
+ upa_writeq(__pa(iommu->page_table), iommu->iommu_regs + IOMMU_TSBBASE);
/* Setup streaming buffer, DE=1 SB_EN=1 */
control = (1UL << 1UL) | (1UL << 0UL);
SIGN1(sys32_mkdir, sys_mkdir, %o1)
SIGN3(sys32_futex, compat_sys_futex, %o1, %o2, %o5)
SIGN1(sys32_sysfs, compat_sys_sysfs, %o0)
-SIGN3(sys32_ipc, compat_sys_ipc, %o1, %o2, %o3)
SIGN2(sys32_sendfile, compat_sys_sendfile, %o0, %o1)
SIGN2(sys32_sendfile64, compat_sys_sendfile64, %o0, %o1)
SIGN1(sys32_prctl, sys_prctl, %o0)
ret = ARG_MAX;
break;
case _SC_CHILD_MAX:
- ret = -1; /* no limit */
+ ret = current->signal->rlim[RLIMIT_NPROC].rlim_cur;
break;
case _SC_CLK_TCK:
ret = HZ;
ret = NGROUPS_MAX;
break;
case _SC_OPEN_MAX:
- ret = OPEN_MAX;
+ ret = current->signal->rlim[RLIMIT_NOFILE].rlim_cur;
break;
case _SC_JOB_CONTROL:
ret = 1; /* yes, we do support job control */
/*200*/ .word sys32_ssetmask, sys_sigsuspend, compat_sys_newlstat, sys_uselib, compat_sys_old_readdir
.word sys32_readahead, sys32_socketcall, sys32_syslog, sys32_lookup_dcookie, sys32_fadvise64
/*210*/ .word sys32_fadvise64_64, sys32_tgkill, sys32_waitpid, sys_swapoff, compat_sys_sysinfo
- .word sys32_ipc, sys32_sigreturn, sys_clone, sys32_ioprio_get, compat_sys_adjtimex
+ .word compat_sys_ipc, sys32_sigreturn, sys_clone, sys32_ioprio_get, compat_sys_adjtimex
/*220*/ .word sys32_sigprocmask, sys_ni_syscall, sys32_delete_module, sys_ni_syscall, sys32_getpgid
.word sys32_bdflush, sys32_sysfs, sys_nis_syscall, sys32_setfsuid16, sys32_setfsgid16
/*230*/ .word sys32_select, compat_sys_time, sys32_splice, compat_sys_stime, compat_sys_statfs64
.word sys_mkdirat, sys_mknodat, sys_fchownat, compat_sys_futimesat, compat_sys_fstatat64
/*290*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
.word sys_fchmodat, sys_faccessat, compat_sys_pselect6, compat_sys_ppoll, sys_unshare
-/*300*/ .word compat_sys_set_robust_list, compat_sys_get_robust_list, compat_sys_migrate_pages
+/*300*/ .word compat_sys_set_robust_list, compat_sys_get_robust_list, compat_sys_migrate_pages, compat_sys_mbind, compat_sys_get_mempolicy
+ .word compat_sys_set_mempolicy, compat_sys_kexec_load, compat_sys_move_pages, sys_getcpu, compat_sys_epoll_pwait
#endif /* CONFIG_COMPAT */
.word sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_fstatat64
/*290*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
.word sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare
-/*300*/ .word sys_set_robust_list, sys_get_robust_list, sys_migrate_pages
+/*300*/ .word sys_set_robust_list, sys_get_robust_list, sys_migrate_pages, sys_mbind, sys_get_mempolicy
+ .word sys_set_mempolicy, sys_kexec_load, sys_move_pages, sys_getcpu, sys_epoll_pwait
#if defined(CONFIG_SUNOS_EMUL) || defined(CONFIG_SOLARIS_EMUL) || \
defined(CONFIG_SOLARIS_EMUL_MODULE)
.word sunos_nosys, sunos_nosys, sunos_nosys
.word sunos_nosys
/*300*/ .word sunos_nosys, sunos_nosys, sunos_nosys
+ .word sunos_nosys, sunos_nosys, sunos_nosys
+ .word sunos_nosys, sunos_nosys, sunos_nosys
+ .word sunos_nosys
#endif
subcc %o1, 0x100, %o1
bne,pt %xcc, 1b
add %o0, 0x100, %o0
+ membar #Sync
retl
wr %g2, 0x0, %asi
.size NGtsb_init, .-NGtsb_init
bne,pt %xcc, NGbzero_loop
add %o0, 64, %o0
+ membar #Sync
wr %o4, 0x0, %asi
brz,pn %o1, NGbzero_done
NGbzero_medium:
/* fall through */
60:
+ membar #Sync
+
/* %o2 contains any final bytes still needed to be copied
* over. If anything is left, we copy it one byte at a time.
*/
subcc %g7, 64, %g7
bne,pt %xcc, 1b
add %o0, 32, %o0
+ membar #Sync
retl
nop
subcc %g7, 64, %g7
bne,pt %xcc, 1b
add %o0, 32, %o0
+ membar #Sync
retl
nop
if (!pte_present(*ptep) && pte_present(entry))
mm->context.huge_pte_count++;
+ addr &= HPAGE_MASK;
for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) {
set_pte_at(mm, addr, ptep, entry);
ptep++;
if (pte_present(entry))
mm->context.huge_pte_count--;
+ addr &= HPAGE_MASK;
+
for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) {
pte_clear(mm, addr, ptep);
addr += PAGE_SIZE;
*/
unsigned long kpte_linear_bitmap[KPTE_BITMAP_BYTES / sizeof(unsigned long)];
+#ifndef CONFIG_DEBUG_PAGEALLOC
/* A special kernel TSB for 4MB and 256MB linear mappings. */
struct tsb swapper_4m_tsb[KERNEL_TSB4M_NENTRIES];
+#endif
#define MAX_BANKS 32
}
/* Don't mark as init, we give this to the Hypervisor. */
-static struct hv_tsb_descr ktsb_descr[2];
+#ifndef CONFIG_DEBUG_PAGEALLOC
+#define NUM_KTSB_DESCR 2
+#else
+#define NUM_KTSB_DESCR 1
+#endif
+static struct hv_tsb_descr ktsb_descr[NUM_KTSB_DESCR];
extern struct tsb swapper_tsb[KERNEL_TSB_NENTRIES];
static void __init sun4v_ktsb_init(void)
ktsb_descr[0].tsb_base = ktsb_pa;
ktsb_descr[0].resv = 0;
+#ifndef CONFIG_DEBUG_PAGEALLOC
/* Second KTSB for 4MB/256MB mappings. */
ktsb_pa = (kern_base +
((unsigned long)&swapper_4m_tsb[0] - KERNBASE));
ktsb_descr[1].ctx_idx = 0;
ktsb_descr[1].tsb_base = ktsb_pa;
ktsb_descr[1].resv = 0;
+#endif
}
void __cpuinit sun4v_ktsb_register(void)
pa = kern_base + ((unsigned long)&ktsb_descr[0] - KERNBASE);
func = HV_FAST_MMU_TSB_CTX0;
- arg0 = 2;
+ arg0 = NUM_KTSB_DESCR;
arg1 = pa;
__asm__ __volatile__("ta %6"
: "=&r" (func), "=&r" (arg0), "=&r" (arg1)
/* Invalidate both kernel TSBs. */
memset(swapper_tsb, 0x40, sizeof(swapper_tsb));
+#ifndef CONFIG_DEBUG_PAGEALLOC
memset(swapper_4m_tsb, 0x40, sizeof(swapper_4m_tsb));
+#endif
if (tlb_type == hypervisor)
sun4v_pgprot_init();
pg_iobits = (_PAGE_VALID | _PAGE_PRESENT_4U | __DIRTY_BITS_4U |
__ACCESS_BITS_4U | _PAGE_E_4U);
+#ifdef CONFIG_DEBUG_PAGEALLOC
+ kern_linear_pte_xor[0] = (_PAGE_VALID | _PAGE_SZBITS_4U) ^
+ 0xfffff80000000000;
+#else
kern_linear_pte_xor[0] = (_PAGE_VALID | _PAGE_SZ4MB_4U) ^
0xfffff80000000000;
+#endif
kern_linear_pte_xor[0] |= (_PAGE_CP_4U | _PAGE_CV_4U |
_PAGE_P_4U | _PAGE_W_4U);
_PAGE_E = _PAGE_E_4V;
_PAGE_CACHE = _PAGE_CACHE_4V;
+#ifdef CONFIG_DEBUG_PAGEALLOC
+ kern_linear_pte_xor[0] = (_PAGE_VALID | _PAGE_SZBITS_4V) ^
+ 0xfffff80000000000;
+#else
kern_linear_pte_xor[0] = (_PAGE_VALID | _PAGE_SZ4MB_4V) ^
0xfffff80000000000;
+#endif
kern_linear_pte_xor[0] |= (_PAGE_CP_4V | _PAGE_CV_4V |
_PAGE_P_4V | _PAGE_W_4V);
+#ifdef CONFIG_DEBUG_PAGEALLOC
+ kern_linear_pte_xor[1] = (_PAGE_VALID | _PAGE_SZBITS_4V) ^
+ 0xfffff80000000000;
+#else
kern_linear_pte_xor[1] = (_PAGE_VALID | _PAGE_SZ256MB_4V) ^
0xfffff80000000000;
+#endif
kern_linear_pte_xor[1] |= (_PAGE_CP_4V | _PAGE_CV_4V |
_PAGE_P_4V | _PAGE_W_4V);
{
switch (id) {
case SOLARIS_CONFIG_NGROUPS: return NGROUPS_MAX;
- case SOLARIS_CONFIG_CHILD_MAX: return -1; /* no limit */
- case SOLARIS_CONFIG_OPEN_FILES: return OPEN_MAX;
+ case SOLARIS_CONFIG_CHILD_MAX:
+ return current->signal->rlim[RLIMIT_NPROC].rlim_cur;
+ case SOLARIS_CONFIG_OPEN_FILES:
+ return current->signal->rlim[RLIMIT_NOFILE].rlim_cur;
case SOLARIS_CONFIG_POSIX_VER: return 199309;
case SOLARIS_CONFIG_PAGESIZE: return PAGE_SIZE;
case SOLARIS_CONFIG_XOPEN_VER: return 3;
config STACKTRACE_SUPPORT
bool
- default y
+ default n
config GENERIC_CALIBRATE_DELAY
bool
struct chan *chan;
LIST_HEAD(list);
struct list_head *ele;
+ unsigned long flags;
- spin_lock_irq(&irqs_to_free_lock);
+ spin_lock_irqsave(&irqs_to_free_lock, flags);
list_splice_init(&irqs_to_free, &list);
- INIT_LIST_HEAD(&irqs_to_free);
- spin_unlock_irq(&irqs_to_free_lock);
+ spin_unlock_irqrestore(&irqs_to_free_lock, flags);
list_for_each(ele, &list){
chan = list_entry(ele, struct chan, free_list);
static void close_one_chan(struct chan *chan, int delay_free_irq)
{
+ unsigned long flags;
+
if(!chan->opened)
return;
if(delay_free_irq){
- spin_lock_irq(&irqs_to_free_lock);
+ spin_lock_irqsave(&irqs_to_free_lock, flags);
list_add(&chan->free_list, &irqs_to_free);
- spin_unlock_irq(&irqs_to_free_lock);
+ spin_unlock_irqrestore(&irqs_to_free_lock, flags);
}
else {
if(chan->input)
err_msg = NULL;
err = (*dev->remove)(n, &err_msg);
switch(err){
+ case 0:
+ err_msg = "";
+ break;
case -ENODEV:
if(err_msg == NULL)
err_msg = "Device doesn't exist";
static DEFINE_MUTEX(ubd_lock);
-/* XXX - this made sense in 2.4 days, now it's only used as a boolean, and
- * probably it doesn't make sense even for that. */
-static int do_ubd;
-
static int ubd_open(struct inode * inode, struct file * filp);
static int ubd_release(struct inode * inode, struct file * file);
static int ubd_ioctl(struct inode * inode, struct file * file,
struct platform_device pdev;
struct request_queue *queue;
spinlock_t lock;
+ int active;
};
#define DEFAULT_COW { \
.shared = 0, \
.cow = DEFAULT_COW, \
.lock = SPIN_LOCK_UNLOCKED, \
+ .active = 0, \
}
/* Protected by ubd_lock */
struct ubd *dev;
int n;
- do_ubd = 0;
n = os_read_file(thread_fd, &req, sizeof(req));
if(n != sizeof(req)){
printk(KERN_ERR "Pid %d - spurious interrupt in ubd_handler, "
rq = req.req;
dev = rq->rq_disk->private_data;
+ dev->active = 0;
ubd_finish(rq, req.error);
reactivate_fd(thread_fd, UBD_IRQ);
}
}
else {
- if(do_ubd || (req = elv_next_request(q)) == NULL)
+ struct ubd *dev = q->queuedata;
+ if(dev->active || (req = elv_next_request(q)) == NULL)
return;
err = prepare_request(req, &io_req);
if(!err){
- do_ubd = 1;
+ dev->active = 1;
n = os_write_file(thread_fd, (char *) &io_req,
sizeof(io_req));
if(n != sizeof(io_req))
#define u32 uint32_t
#endif
+#include "sysdep/ptrace.h"
+
#define MCONSOLE_MAGIC (0xcafebabe)
#define MCONSOLE_MAX_DATA (512)
#define MCONSOLE_VERSION 2
#endif
#ifdef UML_CONFIG_MODE_SKAS
struct skas_regs {
- /* x86_64 ptrace uses sizeof(user_regs_struct) as its register
- * file size, while i386 uses FRAME_SIZE. Therefore, we need
- * to use UM_FRAME_SIZE here instead of HOST_FRAME_SIZE.
- */
unsigned long regs[MAX_REG_NR];
unsigned long fp[HOST_FP_SIZE];
struct faultinfo faultinfo;
void mem_init(void)
{
- max_low_pfn = (high_physmem - uml_physmem) >> PAGE_SHIFT;
-
/* clear the zero-page */
memset((void *) empty_zero_page, 0, PAGE_SIZE);
/* this will put all low memory onto the freelists */
totalram_pages = free_all_bootmem();
+ max_low_pfn = totalram_pages;
#ifdef CONFIG_HIGHMEM
totalhigh_pages = highmem >> PAGE_SHIFT;
totalram_pages += totalhigh_pages;
static inline long do_syscall_stub(struct mm_id * mm_idp, void **addr)
{
unsigned long regs[MAX_REG_NR];
- int n;
+ int n, i;
long ret, offset;
unsigned long * data;
unsigned long * syscall;
(unsigned long) &__syscall_stub_start);
n = ptrace_setregs(pid, regs);
- if(n < 0)
+ if(n < 0){
+ printk("Registers - \n");
+ for(i = 0; i < MAX_REG_NR; i++)
+ printk("\t%d\t0x%lx\n", i, regs[i]);
panic("do_syscall_stub : PTRACE_SETREGS failed, errno = %d\n",
- n);
+ -n);
+ }
wait_stub_done(pid, 0, "do_syscall_stub");
if((n < 0) || !WIFSTOPPED(status) ||
(WSTOPSIG(status) != SIGUSR1 && WSTOPSIG(status) != SIGTRAP)){
- unsigned long regs[HOST_FRAME_SIZE];
+ unsigned long regs[MAX_REG_NR];
if(ptrace(PTRACE_GETREGS, pid, 0, regs) < 0)
printk("Failed to get registers from stub, "
int i;
printk("Stub registers -\n");
- for(i = 0; i < HOST_FRAME_SIZE; i++)
+ for(i = 0; i < ARRAY_SIZE(regs); i++)
printk("\t%d - %lx\n", i, regs[i]);
}
panic("%s : failed to wait for SIGUSR1/SIGTRAP, "
int copy_context_skas0(unsigned long new_stack, int pid)
{
int err;
- unsigned long regs[HOST_FRAME_SIZE];
+ unsigned long regs[MAX_REG_NR];
unsigned long fp_regs[HOST_FP_SIZE];
unsigned long current_stack = current_stub_stack();
struct stub_data *data = (struct stub_data *) current_stack;
/* These are set once at boot time and not changed thereafter */
-static unsigned long exec_regs[HOST_FRAME_SIZE];
+static unsigned long exec_regs[MAX_REG_NR];
static unsigned long exec_fp_regs[HOST_FP_SIZE];
static unsigned long exec_fpx_regs[HOST_XFP_SIZE];
static int have_fpx_regs = 1;
{
int err;
+ memset(exec_regs, 0, sizeof(exec_regs));
err = ptrace(PTRACE_GETREGS, pid, 0, exec_regs);
if(err)
panic("check_ptrace : PTRACE_GETREGS failed, errno = %d",
void get_safe_registers(unsigned long *regs, unsigned long *fp_regs)
{
- memcpy(regs, exec_regs, HOST_FRAME_SIZE * sizeof(unsigned long));
+ memcpy(regs, exec_regs, sizeof(exec_regs));
if(fp_regs != NULL)
memcpy(fp_regs, exec_fp_regs,
HOST_FP_SIZE * sizeof(unsigned long));
/* These are set once at boot time and not changed thereafter */
-static unsigned long exec_regs[HOST_FRAME_SIZE];
+static unsigned long exec_regs[MAX_REG_NR];
static unsigned long exec_fp_regs[HOST_FP_SIZE];
void init_thread_registers(union uml_pt_regs *to)
void get_safe_registers(unsigned long *regs, unsigned long *fp_regs)
{
- memcpy(regs, exec_regs, HOST_FRAME_SIZE * sizeof(unsigned long));
+ memcpy(regs, exec_regs, sizeof(exec_regs));
if(fp_regs != NULL)
memcpy(fp_regs, exec_fp_regs,
HOST_FP_SIZE * sizeof(unsigned long));
$(USER_OBJS:.o=.%): \
c_flags = -Wp,-MD,$(depfile) $(USER_CFLAGS) $(CFLAGS_$(basetarget).o)
$(USER_OBJS) : CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ \
- -Dunix -D__unix__ -D__$(SUBARCH)__
+ -Dunix -D__unix__ -D__$(SUBARCH)__ $(CF)
# These are like USER_OBJS but filter USER_CFLAGS through unprofile instead of
# using it directly.
$(UNPROFILE_OBJS:.o=.%): \
c_flags = -Wp,-MD,$(depfile) $(call unprofile,$(USER_CFLAGS)) $(CFLAGS_$(basetarget).o)
$(UNPROFILE_OBJS) : CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ \
- -Dunix -D__unix__ -D__$(SUBARCH)__
+ -Dunix -D__unix__ -D__$(SUBARCH)__ $(CF)
# The stubs and unmap.o can't try to call mcount or update basic block data
define unprofile
}
EXPORT_SYMBOL(__udelay);
-
-void __const_udelay(unsigned long usecs)
-{
- int i, n;
-
- n = (loops_per_jiffy * HZ * usecs) / MILLION;
- for(i=0;i<n;i++)
- cpu_relax();
-}
-
-EXPORT_SYMBOL(__const_udelay);
static void ldt_get_host_info(void)
{
long ret;
- struct ldt_entry * ldt, *tmp;
+ struct ldt_entry * ldt;
+ short *tmp;
int i, size, k, order;
spin_lock(&host_ldt_lock);
}
EXPORT_SYMBOL(__udelay);
-
-void __const_udelay(unsigned long usecs)
-{
- unsigned long i, n;
-
- n = (loops_per_jiffy * HZ * usecs) / MILLION;
- for(i=0;i<n;i++)
- cpu_relax();
-}
-
-EXPORT_SYMBOL(__const_udelay);
jmp _m_s
check_vesa:
+#ifdef CONFIG_FIRMWARE_EDID
+ leaw modelist+1024, %di
+ movw $0x4f00, %ax
+ int $0x10
+ cmpw $0x004f, %ax
+ jnz setbad
+
+ movw 4(%di), %ax
+ movw %ax, vbe_version
+#endif
leaw modelist+1024, %di
subb $VIDEO_FIRST_VESA>>8, %bh
movw %bx, %cx # Get mode information structure
rep
stosl
+ cmpw $0x0200, vbe_version # only do EDID on >= VBE2.0
+ jl no_edid
+
pushw %es # save ES
xorw %di, %di # Report Capability
pushw %di
svga_prefix: .byte VIDEO_FIRST_BIOS>>8 # Default prefix for BIOS modes
graphic_mode: .byte 0 # Graphic mode with a linear frame buffer
dac_size: .byte 6 # DAC bit depth
+vbe_version: .word 0 # VBE bios version
# Status messages
keymsg: .ascii "Press <RETURN> to see video modes available, "
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.20-git8
-# Tue Feb 13 11:25:16 2007
+# Linux kernel version: 2.6.21-rc3
+# Wed Mar 7 15:29:47 2007
#
CONFIG_X86_64=y
CONFIG_64BIT=y
CONFIG_X86=y
+CONFIG_GENERIC_TIME=y
+CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_ZONE_DMA32=y
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
# CONFIG_IPC_NS is not set
+CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_TASKSTATS is not set
# CONFIG_CPUSETS is not set
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_RELAY is not set
+CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
# CONFIG_X86_VSMP is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
-CONFIG_MCORE2=y
-# CONFIG_GENERIC_CPU is not set
-CONFIG_X86_L1_CACHE_BYTES=64
-CONFIG_X86_L1_CACHE_SHIFT=6
-CONFIG_X86_INTERNODE_CACHE_BYTES=64
+# CONFIG_MCORE2 is not set
+CONFIG_GENERIC_CPU=y
+CONFIG_X86_L1_CACHE_BYTES=128
+CONFIG_X86_L1_CACHE_SHIFT=7
+CONFIG_X86_INTERNODE_CACHE_BYTES=128
CONFIG_X86_TSC=y
CONFIG_X86_GOOD_APIC=y
# CONFIG_MICROCODE is not set
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
-# CONFIG_ACPI_HOTKEY is not set
CONFIG_ACPI_FAN=y
# CONFIG_ACPI_DOCK is not set
-# CONFIG_ACPI_BAY is not set
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_THERMAL=y
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
-# CONFIG_INET_TUNNEL is not set
+CONFIG_INET_TUNNEL=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
#
# Plug and Play support
#
-# CONFIG_PNP is not set
+CONFIG_PNP=y
+# CONFIG_PNP_DEBUG is not set
+
+#
+# Protocols
+#
+CONFIG_PNPACPI=y
#
# Block devices
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_RAM_BLOCKSIZE=1024
-CONFIG_BLK_DEV_INITRD=y
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
# CONFIG_IBM_ASM is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
+# CONFIG_SONY_LAPTOP is not set
#
# ATA/ATAPI/MFM/RLL support
#
CONFIG_IDE_GENERIC=y
# CONFIG_BLK_DEV_CMD640 is not set
+# CONFIG_BLK_DEV_IDEPNP is not set
CONFIG_BLK_DEV_IDEPCI=y
# CONFIG_IDEPCI_SHARE_IRQ is not set
# CONFIG_BLK_DEV_OFFBOARD is not set
# CONFIG_SATA_VITESSE is not set
# CONFIG_SATA_INIC162X is not set
CONFIG_SATA_INTEL_COMBINED=y
+CONFIG_SATA_ACPI=y
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
CONFIG_TUN=y
+# CONFIG_NET_SB1000 is not set
#
# ARCnet devices
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_PCI=y
+CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PASEMI is not set
# CONFIG_I2C_PROSAVAGE is not set
# CONFIG_I2C_SAVAGE4 is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
+# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_HWMON_DEBUG_CHIP is not set
+#
+# Multifunction device drivers
+#
+# CONFIG_MFD_SM501 is not set
+
#
# Multimedia devices
#
#
# Graphics support
#
-# CONFIG_FIRMWARE_EDID is not set
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
# CONFIG_FB is not set
#
CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=256
CONFIG_VIDEO_SELECT=y
CONFIG_DUMMY_CONSOLE=y
-# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
# Open Sound System
#
CONFIG_SOUND_PRIME=y
-CONFIG_OBSOLETE_OSS=y
+# CONFIG_OBSOLETE_OSS is not set
# CONFIG_SOUND_BT878 is not set
-# CONFIG_SOUND_ES1371 is not set
CONFIG_SOUND_ICH=y
# CONFIG_SOUND_TRIDENT is not set
# CONFIG_SOUND_MSNDCLAS is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
+# CONFIG_USB_BERRY_CHARGE is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
+# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
#
CONFIG_LOG_BUF_SHIFT=18
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
+# CONFIG_TIMER_STATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_FORCED_INLINING is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_LKDTM is not set
+# CONFIG_FAULT_INJECTION is not set
# CONFIG_DEBUG_RODATA is not set
# CONFIG_IOMMU_DEBUG is not set
CONFIG_DEBUG_STACKOVERFLOW=y
.quad sys_sched_yield
.quad sys_sched_get_priority_max
.quad sys_sched_get_priority_min /* 160 */
- .quad sys_sched_rr_get_interval
+ .quad sys32_sched_rr_get_interval
.quad compat_sys_nanosleep
.quad sys_mremap
.quad sys_setresuid16
{
pgd_t *slot0 = pgd_offset(current->mm, 0UL);
low_ptr = *slot0;
+ /* FIXME: We're playing with the current task's page tables here, which
+ * is potentially dangerous on SMP systems.
+ */
set_pgd(slot0, *pgd_offset(current->mm, PAGE_OFFSET));
- WARN_ON(num_online_cpus() != 1);
local_flush_tlb();
}
int disable_apic_timer __initdata;
+/* Local APIC timer works in C2? */
+int local_apic_timer_c2_ok;
+EXPORT_SYMBOL_GPL(local_apic_timer_c2_ok);
+
static struct resource *ioapic_resources;
static struct resource lapic_resource = {
.name = "Local APIC",
void smp_send_timer_broadcast_ipi(void)
{
+ int cpu = smp_processor_id();
cpumask_t mask;
cpus_and(mask, cpu_online_map, timer_interrupt_broadcast_ipi_mask);
+
+ if (cpu_isset(cpu, mask)) {
+ cpu_clear(cpu, mask);
+ add_pda(apic_timer_irqs, 1);
+ smp_local_timer_interrupt();
+ }
+
if (!cpus_empty(mask)) {
send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
}
}
early_param("nolapic", setup_nolapic);
+static int __init parse_lapic_timer_c2_ok(char *arg)
+{
+ local_apic_timer_c2_ok = 1;
+ return 0;
+}
+early_param("lapic_timer_c2_ok", parse_lapic_timer_c2_ok);
+
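As a usage note (not part of the patch): parse_lapic_timer_c2_ok() takes no value, so asserting that the local APIC timer keeps ticking in C2 is done by appending the bare token to the kernel command line. A hypothetical command-line fragment:

    root=/dev/sda1 ro quiet lapic_timer_c2_ok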
static __init int setup_noapictimer(char *str)
{
if (str[0] != ' ' && str[0] != 0)
config X86_P4_CLOCKMOD
tristate "Intel Pentium 4 clock modulation"
depends on EMBEDDED
+ select CPU_FREQ_TABLE
help
This adds the clock modulation driver for Intel Pentium 4 / XEON
processors. When enabled it will lower CPU temperature by skipping
}
early_param("memmap", parse_memmap_opt);
-void finish_e820_parsing(void)
+void __init finish_e820_parsing(void)
{
if (userdef) {
printk(KERN_INFO "user-defined physical RAM map:\n");
#include <asm/proto.h>
#include <asm/dma.h>
-static void via_bugs(void)
+static void __init via_bugs(void)
{
#ifdef CONFIG_IOMMU
if ((end_pfn > MAX_DMA32_PFN || force_iommu) &&
}
#endif
-static void nvidia_bugs(void)
+static void __init nvidia_bugs(void)
{
#ifdef CONFIG_ACPI
/*
}
-static void ati_bugs(void)
+static void __init ati_bugs(void)
{
if (timer_over_8254 == 1) {
timer_over_8254 = 0;
void (*f)(void);
};
-static struct chipset early_qrk[] = {
+static struct chipset early_qrk[] __initdata = {
{ PCI_VENDOR_ID_NVIDIA, nvidia_bugs },
{ PCI_VENDOR_ID_VIA, via_bugs },
{ PCI_VENDOR_ID_ATI, ati_bugs },
*(.text.dentry_open)
*(.text.dentry_iput)
*(.text.bio_alloc)
-*(.text.alloc_skb_from_cache)
*(.text.wait_on_page_bit)
*(.text.vfs_readdir)
*(.text.vfs_lstat)
#define TICK_COUNT 100000000
#define TICK_MIN 5000
+#define MAX_TRIES 5
/*
* Some platforms take periodic SMI interrupts with 5ms duration. Make sure none
*/
static void __init read_hpet_tsc(int *hpet, int *tsc)
{
- int tsc1, tsc2, hpet1;
+ int tsc1, tsc2, hpet1, i;
- do {
+ for (i = 0; i < MAX_TRIES; i++) {
tsc1 = get_cycles_sync();
hpet1 = hpet_readl(HPET_COUNTER);
tsc2 = get_cycles_sync();
- } while (tsc2 - tsc1 > TICK_MIN);
+ if (tsc2 - tsc1 > TICK_MIN)
+ break;
+ }
*hpet = hpet1;
*tsc = tsc2;
}
/*
* ISA PIC or low IO-APIC triggered (INTA-cycle or APIC) interrupts:
- * (these are usually mapped to vectors 0x20-0x2f)
+ * (these are usually mapped to vectors 0x30-0x3f)
*/
/*
* outb_p - this has to work on a wide range of PC hardware.
*/
outb_p(0x11, 0x20); /* ICW1: select 8259A-1 init */
- outb_p(IRQ0_VECTOR, 0x21); /* ICW2: 8259A-1 IR0-7 mapped to 0x20-0x27 */
+ outb_p(IRQ0_VECTOR, 0x21); /* ICW2: 8259A-1 IR0-7 mapped to 0x30-0x37 */
outb_p(0x04, 0x21); /* 8259A-1 (the master) has a slave on IR2 */
if (auto_eoi)
outb_p(0x03, 0x21); /* master does Auto EOI */
outb_p(0x01, 0x21); /* master expects normal EOI */
outb_p(0x11, 0xA0); /* ICW1: select 8259A-2 init */
- outb_p(IRQ8_VECTOR, 0xA1); /* ICW2: 8259A-2 IR0-7 mapped to 0x28-0x2f */
+ outb_p(IRQ8_VECTOR, 0xA1); /* ICW2: 8259A-2 IR0-7 mapped to 0x38-0x3f */
outb_p(0x02, 0xA1); /* 8259A-2 is a slave on master's IR2 */
outb_p(0x01, 0xA1); /* (slave's support for AEOI in flat mode
is to be investigated) */
dev = NULL;
i = 0;
while ((dev = next_k8_northbridge(dev)) != NULL) {
- k8_northbridges[i++] = dev;
- pci_read_config_dword(dev, 0x9c, &flush_words[i]);
+ k8_northbridges[i] = dev;
+ pci_read_config_dword(dev, 0x9c, &flush_words[i++]);
}
k8_northbridges[i] = NULL;
return 0;
/* Processor that is doing the boot up */
unsigned int boot_cpu_id = -1U;
/* Internal processor count */
-unsigned int num_processors __initdata = 0;
+unsigned int num_processors __cpuinitdata = 0;
-unsigned disabled_cpus __initdata;
+unsigned disabled_cpus __cpuinitdata;
/* Bitmask of physically existing CPUs */
physid_mask_t phys_cpu_present_map = PHYSID_MASK_NONE;
* different subsystems this reservation system just tries to coordinate
* things a little
*/
-static DEFINE_PER_CPU(unsigned, perfctr_nmi_owner);
-static DEFINE_PER_CPU(unsigned, evntsel_nmi_owner[2]);
-
-static cpumask_t backtrace_mask = CPU_MASK_NONE;
/* this number is calculated from Intel's MSR_P4_CRU_ESCR5 register and it's
* offset from MSR_P4_BSU_ESCR0. It will be the max for all platforms (for now)
*/
#define NMI_MAX_COUNTER_BITS 66
+#define NMI_MAX_COUNTER_LONGS BITS_TO_LONGS(NMI_MAX_COUNTER_BITS)
+
+static DEFINE_PER_CPU(unsigned, perfctr_nmi_owner[NMI_MAX_COUNTER_LONGS]);
+static DEFINE_PER_CPU(unsigned, evntsel_nmi_owner[NMI_MAX_COUNTER_LONGS]);
+
+static cpumask_t backtrace_mask = CPU_MASK_NONE;
/* nmi_active:
* >0: the lapic NMI watchdog is active, but can be disabled
/* checks for a bit availability (hack for oprofile) */
int avail_to_resrv_perfctr_nmi_bit(unsigned int counter)
{
+ int cpu;
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
-
- return (!test_bit(counter, &__get_cpu_var(perfctr_nmi_owner)));
+ for_each_possible_cpu (cpu) {
+ if (test_bit(counter, &per_cpu(perfctr_nmi_owner, cpu)))
+ return 0;
+ }
+ return 1;
}
/* checks an msr for availability */
int avail_to_resrv_perfctr_nmi(unsigned int msr)
{
unsigned int counter;
+ int cpu;
counter = nmi_perfctr_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- return (!test_bit(counter, &__get_cpu_var(perfctr_nmi_owner)));
+ for_each_possible_cpu (cpu) {
+ if (test_bit(counter, &per_cpu(perfctr_nmi_owner, cpu)))
+ return 0;
+ }
+ return 1;
}
-int reserve_perfctr_nmi(unsigned int msr)
+static int __reserve_perfctr_nmi(int cpu, unsigned int msr)
{
unsigned int counter;
+ if (cpu < 0)
+ cpu = smp_processor_id();
counter = nmi_perfctr_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- if (!test_and_set_bit(counter, &__get_cpu_var(perfctr_nmi_owner)))
+ if (!test_and_set_bit(counter, &per_cpu(perfctr_nmi_owner, cpu)))
return 1;
return 0;
}
-void release_perfctr_nmi(unsigned int msr)
+static void __release_perfctr_nmi(int cpu, unsigned int msr)
{
unsigned int counter;
+ if (cpu < 0)
+ cpu = smp_processor_id();
counter = nmi_perfctr_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- clear_bit(counter, &__get_cpu_var(perfctr_nmi_owner));
+ clear_bit(counter, &per_cpu(perfctr_nmi_owner, cpu));
}
-int reserve_evntsel_nmi(unsigned int msr)
+int reserve_perfctr_nmi(unsigned int msr)
+{
+ int cpu, i;
+ for_each_possible_cpu (cpu) {
+ if (!__reserve_perfctr_nmi(cpu, msr)) {
+ for_each_possible_cpu (i) {
+ if (i >= cpu)
+ break;
+ __release_perfctr_nmi(i, msr);
+ }
+ return 0;
+ }
+ }
+ return 1;
+}
+
+void release_perfctr_nmi(unsigned int msr)
+{
+ int cpu;
+ for_each_possible_cpu (cpu)
+ __release_perfctr_nmi(cpu, msr);
+}
+
+int __reserve_evntsel_nmi(int cpu, unsigned int msr)
{
unsigned int counter;
+ if (cpu < 0)
+ cpu = smp_processor_id();
counter = nmi_evntsel_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- if (!test_and_set_bit(counter, &__get_cpu_var(evntsel_nmi_owner)))
+ if (!test_and_set_bit(counter, &per_cpu(evntsel_nmi_owner, cpu)[0]))
return 1;
return 0;
}
-void release_evntsel_nmi(unsigned int msr)
+static void __release_evntsel_nmi(int cpu, unsigned int msr)
{
unsigned int counter;
+ if (cpu < 0)
+ cpu = smp_processor_id();
counter = nmi_evntsel_msr_to_bit(msr);
BUG_ON(counter > NMI_MAX_COUNTER_BITS);
- clear_bit(counter, &__get_cpu_var(evntsel_nmi_owner));
+ clear_bit(counter, &per_cpu(evntsel_nmi_owner, cpu)[0]);
+}
+
+int reserve_evntsel_nmi(unsigned int msr)
+{
+ int cpu, i;
+ for_each_possible_cpu (cpu) {
+ if (!__reserve_evntsel_nmi(cpu, msr)) {
+ for_each_possible_cpu (i) {
+ if (i >= cpu)
+ break;
+ __release_evntsel_nmi(i, msr);
+ }
+ return 0;
+ }
+ }
+ return 1;
+}
+
+void release_evntsel_nmi(unsigned int msr)
+{
+ int cpu;
+ for_each_possible_cpu (cpu) {
+ __release_evntsel_nmi(cpu, msr);
+ }
}
static __cpuinit inline int nmi_known_cpu(void)
{
if (nmi_watchdog != NMI_DEFAULT)
return;
- if (nmi_known_cpu())
- nmi_watchdog = NMI_LOCAL_APIC;
- else
- nmi_watchdog = NMI_IO_APIC;
+ nmi_watchdog = NMI_NONE;
}
static int endflag __initdata = 0;
for (cpu = 0; cpu < NR_CPUS; cpu++)
counts[cpu] = cpu_pda(cpu)->__nmi_count;
local_irq_enable();
- mdelay((10*1000)/nmi_hz); // wait 10 ticks
+ mdelay((20*1000)/nmi_hz); // wait 20 ticks
for_each_online_cpu(cpu) {
if (!per_cpu(nmi_watchdog_ctlblk, cpu).enabled)
perfctr_msr = MSR_K7_PERFCTR0;
evntsel_msr = MSR_K7_EVNTSEL0;
- if (!reserve_perfctr_nmi(perfctr_msr))
+ if (!__reserve_perfctr_nmi(-1, perfctr_msr))
goto fail;
- if (!reserve_evntsel_nmi(evntsel_msr))
+ if (!__reserve_evntsel_nmi(-1, evntsel_msr))
goto fail1;
/* Simulator may not support it */
wd->check_bit = 1ULL<<63;
return 1;
fail2:
- release_evntsel_nmi(evntsel_msr);
+ __release_evntsel_nmi(-1, evntsel_msr);
fail1:
- release_perfctr_nmi(perfctr_msr);
+ __release_perfctr_nmi(-1, perfctr_msr);
fail:
return 0;
}
wrmsr(wd->evntsel_msr, 0, 0);
- release_evntsel_nmi(wd->evntsel_msr);
- release_perfctr_nmi(wd->perfctr_msr);
+ __release_evntsel_nmi(-1, wd->evntsel_msr);
+ __release_perfctr_nmi(-1, wd->perfctr_msr);
}
/* Note that these events don't tick when the CPU idles. This means
cccr_val = P4_CCCR_OVF_PMI1 | P4_CCCR_ESCR_SELECT(4);
}
- if (!reserve_perfctr_nmi(perfctr_msr))
+ if (!__reserve_perfctr_nmi(-1, perfctr_msr))
goto fail;
- if (!reserve_evntsel_nmi(evntsel_msr))
+ if (!__reserve_evntsel_nmi(-1, evntsel_msr))
goto fail1;
evntsel = P4_ESCR_EVENT_SELECT(0x3F)
wd->check_bit = 1ULL<<39;
return 1;
fail1:
- release_perfctr_nmi(perfctr_msr);
+ __release_perfctr_nmi(-1, perfctr_msr);
fail:
return 0;
}
wrmsr(wd->cccr_msr, 0, 0);
wrmsr(wd->evntsel_msr, 0, 0);
- release_evntsel_nmi(wd->evntsel_msr);
- release_perfctr_nmi(wd->perfctr_msr);
+ __release_evntsel_nmi(-1, wd->evntsel_msr);
+ __release_perfctr_nmi(-1, wd->perfctr_msr);
}
#define ARCH_PERFMON_NMI_EVENT_SEL ARCH_PERFMON_UNHALTED_CORE_CYCLES_SEL
perfctr_msr = MSR_ARCH_PERFMON_PERFCTR0;
evntsel_msr = MSR_ARCH_PERFMON_EVENTSEL0;
- if (!reserve_perfctr_nmi(perfctr_msr))
+ if (!__reserve_perfctr_nmi(-1, perfctr_msr))
goto fail;
- if (!reserve_evntsel_nmi(evntsel_msr))
+ if (!__reserve_evntsel_nmi(-1, evntsel_msr))
goto fail1;
wrmsrl(perfctr_msr, 0UL);
wd->check_bit = 1ULL << (eax.split.bit_width - 1);
return 1;
fail1:
- release_perfctr_nmi(perfctr_msr);
+ __release_perfctr_nmi(-1, perfctr_msr);
fail:
return 0;
}
wrmsr(wd->evntsel_msr, 0, 0);
- release_evntsel_nmi(wd->evntsel_msr);
- release_perfctr_nmi(wd->perfctr_msr);
+ __release_evntsel_nmi(-1, wd->evntsel_msr);
+ __release_perfctr_nmi(-1, wd->perfctr_msr);
}
void setup_apic_nmi_watchdog(void *unused)
gatt_size = (aper_size >> PAGE_SHIFT) * sizeof(u32);
gatt = (void *)__get_free_pages(GFP_KERNEL, get_order(gatt_size));
if (!gatt)
- panic("Cannot allocate GATT table");
+ panic("Cannot allocate GATT table");
+ if (change_page_attr_addr((unsigned long)gatt, gatt_size >> PAGE_SHIFT, PAGE_KERNEL_NOCACHE))
+ panic("Could not set GART PTEs to uncacheable pages");
+ global_flush_tlb();
+
memset(gatt, 0, gatt_size);
agp_gatt_table = gatt;
dma_ops = &gart_dma_ops;
}
-void gart_parse_options(char *p)
+void __init gart_parse_options(char *p)
{
int arg;
void flush_thread(void)
{
struct task_struct *tsk = current;
- struct thread_info *t = current_thread_info();
- if (t->flags & _TIF_ABI_PENDING) {
- t->flags ^= (_TIF_ABI_PENDING | _TIF_IA32);
- if (t->flags & _TIF_IA32)
+ if (test_tsk_thread_flag(tsk, TIF_ABI_PENDING)) {
+ clear_tsk_thread_flag(tsk, TIF_ABI_PENDING);
+ if (test_tsk_thread_flag(tsk, TIF_IA32)) {
+ clear_tsk_thread_flag(tsk, TIF_IA32);
+ } else {
+ set_tsk_thread_flag(tsk, TIF_IA32);
current_thread_info()->status |= TS_COMPAT;
+ }
}
- t->flags &= ~_TIF_DEBUG;
+ clear_tsk_thread_flag(tsk, TIF_DEBUG);
tsk->thread.debugreg0 = 0;
tsk->thread.debugreg1 = 0;
OUTPUT_ARCH(i386:x86-64)
ENTRY(phys_startup_64)
jiffies_64 = jiffies;
-_proxy_pda = 0;
+_proxy_pda = 1;
PHDRS {
text PT_LOAD FLAGS(5); /* R_E */
data PT_LOAD FLAGS(7); /* RWE */
vread = __vsyscall_gtod_data.clock.vread;
if (unlikely(!__vsyscall_gtod_data.sysctl_enabled || !vread)) {
- gettimeofday(tv,0);
+ gettimeofday(tv,NULL);
return;
}
now = vread();
EXPORT_SYMBOL(init_level4_pgt);
EXPORT_SYMBOL(load_gs_index);
+EXPORT_SYMBOL(_proxy_pda);
void *adr = page_address(pg);
if (cpu_has_clflush)
cache_flush_page(adr);
- __flush_tlb_one(adr);
}
+ __flush_tlb_all();
}
static inline void flush_map(struct list_head *l)
if (!cfq_cfqq_on_rr(cfqq))
cfq_add_cfqq_rr(cfqd, cfqq);
+
+ /*
+ * check if this request is a better next-serve candidate
+ */
+ cfqq->next_rq = cfq_choose_req(cfqd, cfqq->next_rq, rq);
+ BUG_ON(!cfqq->next_rq);
}
static inline void
* expire an async queue immediately if it has used up its slice. idle
* queue always expire after 1 dispatch round.
*/
- if ((!cfq_cfqq_sync(cfqq) &&
+ if (cfqd->busy_queues > 1 && ((!cfq_cfqq_sync(cfqq) &&
cfqd->dispatch_slice >= cfq_prio_to_maxrq(cfqd, cfqq)) ||
- cfq_class_idle(cfqq)) {
+ cfq_class_idle(cfqq))) {
cfqq->slice_end = jiffies + 1;
cfq_slice_expired(cfqd, 0, 0);
}
while ((cfqq = cfq_select_queue(cfqd)) != NULL) {
int max_dispatch;
- /*
- * Don't repeat dispatch from the previous queue.
- */
- if (prev_cfqq == cfqq)
- break;
+ if (cfqd->busy_queues > 1) {
+ /*
+ * Don't repeat dispatch from the previous queue.
+ */
+ if (prev_cfqq == cfqq)
+ break;
- /*
- * So we have dispatched before in this round, if the
- * next queue has idling enabled (must be sync), don't
- * allow it service until the previous have continued.
- */
- if (cfqd->rq_in_driver && cfq_cfqq_idle_window(cfqq))
- break;
+ /*
+ * So we have dispatched before in this round, if the
+ * next queue has idling enabled (must be sync), don't
+ * allow it service until the previous have continued.
+ */
+ if (cfqd->rq_in_driver && cfq_cfqq_idle_window(cfqq))
+ break;
+ }
cfq_clear_cfqq_must_dispatch(cfqq);
cfq_clear_cfqq_wait_request(cfqq);
atomic_set(&cfqq->ref, 0);
cfqq->cfqd = cfqd;
- cfq_mark_cfqq_idle_window(cfqq);
+ if (key != CFQ_KEY_ASYNC)
+ cfq_mark_cfqq_idle_window(cfqq);
+
cfq_mark_cfqq_prio_changed(cfqq);
cfq_mark_cfqq_queue_new(cfqq);
cfq_init_prio_data(cfqq);
if (rq_is_meta(rq))
cfqq->meta_pending++;
- /*
- * check if this request is a better next-serve candidate)) {
- */
- cfqq->next_rq = cfq_choose_req(cfqd, cfqq->next_rq, rq);
- BUG_ON(!cfqq->next_rq);
-
/*
* we never wait for an async request and we don't allow preemption
* of an async request. so just return early
int elv_register(struct elevator_type *e)
{
+ char *def = "";
spin_lock_irq(&elv_list_lock);
BUG_ON(elevator_find(e->elevator_name));
list_add_tail(&e->list, &elv_list);
spin_unlock_irq(&elv_list_lock);
- printk(KERN_INFO "io scheduler %s registered", e->elevator_name);
if (!strcmp(e->elevator_name, chosen_elevator) ||
(!*chosen_elevator &&
!strcmp(e->elevator_name, CONFIG_DEFAULT_IOSCHED)))
- printk(" (default)");
- printk("\n");
+ def = " (default)";
+
+ printk(KERN_INFO "io scheduler %s registered%s\n", e->elevator_name, def);
return 0;
}
EXPORT_SYMBOL_GPL(elv_register);
/* temporary */
if (major == 0) {
for (index = ARRAY_SIZE(major_names)-1; index > 0; index--) {
- if (is_lanana_major(index))
- continue;
if (major_names[index] == NULL)
break;
}
* considered part of another segment, since that might
* change with the bounce page.
*/
- high = page_to_pfn(bv->bv_page) >= q->bounce_pfn;
+ high = page_to_pfn(bv->bv_page) > q->bounce_pfn;
if (high || highprv)
goto new_hw_segment;
if (cluster) {
open_softirq(BLOCK_SOFTIRQ, blk_done_softirq, NULL);
register_hotcpu_notifier(&blk_cpu_notifier);
- blk_max_low_pfn = max_low_pfn;
- blk_max_pfn = max_pfn;
+ blk_max_low_pfn = max_low_pfn - 1;
+ blk_max_pfn = max_pfn - 1;
return 0;
}
static void scatterwalk_pagedone(struct scatter_walk *walk, int out,
unsigned int more)
{
- if (out)
- flush_dcache_page(scatterwalk_page(walk));
+ if (out) {
+ struct page *page;
+
+ page = walk->sg->page + ((walk->offset - 1) >> PAGE_SHIFT);
+ flush_dcache_page(page);
+ }
if (more) {
walk->offset += PAGE_SIZE - 1;
memcpy_dir(buf, vaddr, len_this_page, out);
scatterwalk_unmap(vaddr, out);
+ scatterwalk_advance(walk, len_this_page);
+
if (nbytes == len_this_page)
break;
scatterwalk_pagedone(walk, out, 1);
}
-
- scatterwalk_advance(walk, nbytes);
}
EXPORT_SYMBOL_GPL(scatterwalk_copychunks);
tv = (void *)tvmem;
tfm = crypto_alloc_comp("deflate", 0, CRYPTO_ALG_ASYNC);
- if (tfm == NULL) {
+ if (IS_ERR(tfm)) {
printk("failed to load transform for deflate\n");
return;
}
notify_info->notify.value = (u16) notify_value;
notify_info->notify.handler_obj = handler_obj;
- acpi_ex_relinquish_interpreter();
+ acpi_ex_exit_interpreter();
acpi_ev_notify_dispatch(notify_info);
- acpi_ex_reacquire_interpreter();
+ status = acpi_ex_enter_interpreter();
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
+
}
if (!handler_obj) {
acpi_gbl_global_lock_acquired = FALSE;
/* Release the local GL mutex */
- acpi_ev_global_lock_thread_id = 0;
+ acpi_ev_global_lock_thread_id = NULL;
acpi_ev_global_lock_acquired = 0;
acpi_os_release_mutex(acpi_gbl_global_lock_mutex);
return_ACPI_STATUS(status);
u32 bit_width, acpi_integer * value)
{
acpi_status status;
+ acpi_status status2;
acpi_adr_space_handler handler;
acpi_adr_space_setup region_setup;
union acpi_operand_object *handler_desc;
* setup will potentially execute control methods
* (e.g., _REG method for this region)
*/
- acpi_ex_relinquish_interpreter();
+ acpi_ex_exit_interpreter();
status = region_setup(region_obj, ACPI_REGION_ACTIVATE,
handler_desc->address_space.context,
/* Re-enter the interpreter */
- acpi_ex_reacquire_interpreter();
+ status2 = acpi_ex_enter_interpreter();
+ if (ACPI_FAILURE(status2)) {
+ return_ACPI_STATUS(status2);
+ }
/* Check for failure of the Region Setup */
* exit the interpreter because the handler *might* block -- we don't
 * know what it will do, so we can't hold the lock on the interpreter.
*/
- acpi_ex_relinquish_interpreter();
+ acpi_ex_exit_interpreter();
}
/* Call the handler */
* We just returned from a non-default handler, we must re-enter the
* interpreter
*/
- acpi_ex_reacquire_interpreter();
+ status2 = acpi_ex_enter_interpreter();
+ if (ACPI_FAILURE(status2)) {
+ return_ACPI_STATUS(status2);
+ }
}
return_ACPI_STATUS(status);
return (AE_BAD_PARAMETER);
}
- /* Must lock interpreter to prevent race conditions */
+ status = acpi_ex_enter_interpreter();
+ if (ACPI_FAILURE(status)) {
+ return (status);
+ }
- acpi_ex_enter_interpreter();
status = acpi_ev_acquire_global_lock(timeout);
acpi_ex_exit_interpreter();
* Get the sync_level. If method is serialized, a mutex will be
* created for this method when it is parsed.
*/
- if (method_flags & AML_METHOD_SERIALIZED) {
+ if (acpi_gbl_all_methods_serialized) {
+ obj_desc->method.sync_level = 0;
+ obj_desc->method.method_flags |= AML_METHOD_SERIALIZED;
+ } else if (method_flags & AML_METHOD_SERIALIZED) {
/*
* ACPI 1.0: sync_level = 0
* ACPI 2.0: sync_level = sync_level in method declaration
acpi_status acpi_ex_system_wait_semaphore(acpi_semaphore semaphore, u16 timeout)
{
acpi_status status;
+ acpi_status status2;
ACPI_FUNCTION_TRACE(ex_system_wait_semaphore);
/* We must wait, so unlock the interpreter */
- acpi_ex_relinquish_interpreter();
+ acpi_ex_exit_interpreter();
status = acpi_os_wait_semaphore(semaphore, 1, timeout);
/* Reacquire the interpreter */
- acpi_ex_reacquire_interpreter();
+ status2 = acpi_ex_enter_interpreter();
+ if (ACPI_FAILURE(status2)) {
+
+ /* Report fatal error, could not acquire interpreter */
+
+ return_ACPI_STATUS(status2);
+ }
}
return_ACPI_STATUS(status);
acpi_status acpi_ex_system_wait_mutex(acpi_mutex mutex, u16 timeout)
{
acpi_status status;
+ acpi_status status2;
ACPI_FUNCTION_TRACE(ex_system_wait_mutex);
/* We must wait, so unlock the interpreter */
- acpi_ex_relinquish_interpreter();
+ acpi_ex_exit_interpreter();
status = acpi_os_acquire_mutex(mutex, timeout);
/* Reacquire the interpreter */
- acpi_ex_reacquire_interpreter();
+ status2 = acpi_ex_enter_interpreter();
+ if (ACPI_FAILURE(status2)) {
+
+ /* Report fatal error, could not acquire interpreter */
+
+ return_ACPI_STATUS(status2);
+ }
}
return_ACPI_STATUS(status);
acpi_status acpi_ex_system_do_suspend(acpi_integer how_long)
{
+ acpi_status status;
+
ACPI_FUNCTION_ENTRY();
/* Since this thread will sleep, we must release the interpreter */
- acpi_ex_relinquish_interpreter();
+ acpi_ex_exit_interpreter();
acpi_os_sleep(how_long);
/* And now we must get the interpreter again */
- acpi_ex_reacquire_interpreter();
- return (AE_OK);
+ status = acpi_ex_enter_interpreter();
+ return (status);
}
/*******************************************************************************
*
* PARAMETERS: None
*
- * RETURN: None
+ * RETURN: Status
*
- * DESCRIPTION: Enter the interpreter execution region. Failure to enter
- * the interpreter region is a fatal system error. Used in
- * conjunction with exit_interpreter.
+ * DESCRIPTION: Enter the interpreter execution region. Failure to enter
+ * the interpreter region is a fatal system error
*
******************************************************************************/
-void acpi_ex_enter_interpreter(void)
+acpi_status acpi_ex_enter_interpreter(void)
{
acpi_status status;
status = acpi_ut_acquire_mutex(ACPI_MTX_INTERPRETER);
if (ACPI_FAILURE(status)) {
- ACPI_ERROR((AE_INFO,
- "Could not acquire AML Interpreter mutex"));
+ ACPI_ERROR((AE_INFO, "Could not acquire interpreter mutex"));
}
- return_VOID;
-}
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ex_reacquire_interpreter
- *
- * PARAMETERS: None
- *
- * RETURN: None
- *
- * DESCRIPTION: Reacquire the interpreter execution region from within the
- * interpreter code. Failure to enter the interpreter region is a
- * fatal system error. Used in conjuction with
- * relinquish_interpreter
- *
- ******************************************************************************/
-
-void acpi_ex_reacquire_interpreter(void)
-{
- ACPI_FUNCTION_TRACE(ex_reacquire_interpreter);
-
- /*
- * If the global serialized flag is set, do not release the interpreter,
- * since it was not actually released by acpi_ex_relinquish_interpreter.
- * This forces the interpreter to be single threaded.
- */
- if (!acpi_gbl_all_methods_serialized) {
- acpi_ex_enter_interpreter();
- }
-
- return_VOID;
+ return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* RETURN: None
*
- * DESCRIPTION: Exit the interpreter execution region. This is the top level
- * routine used to exit the interpreter when all processing has
- * been completed.
+ * DESCRIPTION: Exit the interpreter execution region
+ *
+ * Cases where the interpreter is unlocked:
+ * 1) Completion of the execution of a control method
+ * 2) Method blocked on a Sleep() AML opcode
+ * 3) Method blocked on an Acquire() AML opcode
+ * 4) Method blocked on a Wait() AML opcode
+ * 5) Method blocked to acquire the global lock
+ * 6) Method blocked to execute a serialized control method that is
+ * already executing
+ * 7) About to invoke a user-installed opregion handler
*
******************************************************************************/
status = acpi_ut_release_mutex(ACPI_MTX_INTERPRETER);
if (ACPI_FAILURE(status)) {
- ACPI_ERROR((AE_INFO,
- "Could not release AML Interpreter mutex"));
- }
-
- return_VOID;
-}
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ex_relinquish_interpreter
- *
- * PARAMETERS: None
- *
- * RETURN: None
- *
- * DESCRIPTION: Exit the interpreter execution region, from within the
- * interpreter - before attempting an operation that will possibly
- * block the running thread.
- *
- * Cases where the interpreter is unlocked internally
- * 1) Method to be blocked on a Sleep() AML opcode
- * 2) Method to be blocked on an Acquire() AML opcode
- * 3) Method to be blocked on a Wait() AML opcode
- * 4) Method to be blocked to acquire the global lock
- * 5) Method to be blocked waiting to execute a serialized control method
- * that is currently executing
- * 6) About to invoke a user-installed opregion handler
- *
- ******************************************************************************/
-
-void acpi_ex_relinquish_interpreter(void)
-{
- ACPI_FUNCTION_TRACE(ex_relinquish_interpreter);
-
- /*
- * If the global serialized flag is set, do not release the interpreter.
- * This forces the interpreter to be single threaded.
- */
- if (!acpi_gbl_all_methods_serialized) {
- acpi_ex_exit_interpreter();
+ ACPI_ERROR((AE_INFO, "Could not release interpreter mutex"));
}
return_VOID;
*
* RETURN: none
*
- * DESCRIPTION: Truncate an ACPI Integer to 32 bits if the execution mode is
- * 32-bit, as determined by the revision of the DSDT.
+ * DESCRIPTION: Truncate a number to 32-bits if the currently executing method
+ * belongs to a 32-bit ACPI table.
*
******************************************************************************/
/*
* 2) Enable all wakeup GPEs
*/
+ status = acpi_hw_disable_all_gpes();
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
+
acpi_gbl_system_awake_and_running = FALSE;
status = acpi_hw_enable_all_wakeup_gpes();
ret = acpi_bus_get_device(*ibm->handle, &ibm->device);
if (ret < 0) {
printk(IBM_ERR "%s device not present\n", ibm->name);
- return 0;
+ return -ENODEV;
}
acpi_driver_data(ibm->device) = ibm;
status = acpi_install_notify_handler(*ibm->handle, ibm->type,
dispatch_notify, ibm);
if (ACPI_FAILURE(status)) {
- printk(IBM_ERR "acpi_install_notify_handler(%s) failed: %d\n",
- ibm->name, status);
+ if (status == AE_ALREADY_EXISTS) {
+ printk(IBM_NOTICE "another device driver is already handling %s events\n",
+ ibm->name);
+ } else {
+ printk(IBM_ERR "acpi_install_notify_handler(%s) failed: %d\n",
+ ibm->name, status);
+ }
return -ENODEV;
}
ibm->notify_installed = 1;
return ret;
}
+static void ibm_exit(struct ibm_struct *ibm);
+
static int __init ibm_init(struct ibm_struct *ibm)
{
int ret;
if (ibm->notify) {
ret = setup_notify(ibm);
+ if (ret == -ENODEV) {
+ printk(IBM_NOTICE "disabling subdriver %s\n",
+ ibm->name);
+ ibm_exit(ibm);
+ return 0;
+ }
if (ret < 0)
return ret;
}
* Execute the method via the interpreter. The interpreter is locked
* here before calling into the AML parser
*/
- acpi_ex_enter_interpreter();
+ status = acpi_ex_enter_interpreter();
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
+
status = acpi_ps_execute_method(info);
acpi_ex_exit_interpreter();
} else {
* resolution, we must lock it because we could access an opregion.
* The opregion access code assumes that the interpreter is locked.
*/
- acpi_ex_enter_interpreter();
+ status = acpi_ex_enter_interpreter();
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
/* Function has a strange interface */
u32 level, void *context, void **return_value)
{
acpi_object_type type;
- acpi_status status = AE_OK;
+ acpi_status status;
struct acpi_init_walk_info *info =
(struct acpi_init_walk_info *)context;
struct acpi_namespace_node *node =
/*
* Must lock the interpreter before executing AML code
*/
- acpi_ex_enter_interpreter();
+ status = acpi_ex_enter_interpreter();
+ if (ACPI_FAILURE(status)) {
+ return (status);
+ }
/*
* Each of these types can contain executable AML code within the
struct acpi_buffer *return_buffer)
{
acpi_status status;
+ acpi_status status2;
struct acpi_evaluate_info *info;
acpi_size buffer_space_needed;
u32 i;
* Delete the internal return object. NOTE: Interpreter must be
* locked to avoid race condition.
*/
- acpi_ex_enter_interpreter();
+ status2 = acpi_ex_enter_interpreter();
+ if (ACPI_SUCCESS(status2)) {
- /* Remove one reference on the return object (should delete it) */
+ /* Remove one reference on the return object (should delete it) */
- acpi_ut_remove_reference(info->return_object);
- acpi_ex_exit_interpreter();
+ acpi_ut_remove_reference(info->return_object);
+ acpi_ex_exit_interpreter();
+ }
}
cleanup:
struct acpi_processor_cx *cx)
{
struct acpi_processor_power *pwr = &pr->power;
+ u8 type = local_apic_timer_c2_ok ? ACPI_STATE_C3 : ACPI_STATE_C2;
/*
* Check, if one of the previous states already marked the lapic
if (pwr->timer_broadcast_on_state < state)
return;
- if (cx->type >= ACPI_STATE_C2)
+ if (cx->type >= type)
pr->power.timer_broadcast_on_state = state;
}
static struct acpi_table_desc initial_tables[ACPI_MAX_TABLES] __initdata;
-void acpi_table_print_madt_entry(struct acpi_subtable_header * header)
+static int acpi_apic_instance __initdata;
+
+void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
{
if (!header)
return;
if (!handler)
return -EINVAL;
- /* Locate the table (if exists). There should only be one. */
- acpi_get_table(id, 0, &table_header);
+ if (strncmp(id, ACPI_SIG_MADT, 4) == 0)
+ acpi_get_table(id, acpi_apic_instance, &table_header);
+ else
+ acpi_get_table(id, 0, &table_header);
if (!table_header) {
printk(KERN_WARNING PREFIX "%4.4s not present\n", id);
int __init acpi_table_parse(char *id, acpi_table_handler handler)
{
struct acpi_table_header *table = NULL;
+
if (!handler)
return -EINVAL;
- acpi_get_table(id, 0, &table);
+ if (strncmp(id, ACPI_SIG_MADT, 4) == 0)
+ acpi_get_table(id, acpi_apic_instance, &table);
+ else
+ acpi_get_table(id, 0, &table);
+
if (table) {
handler(table);
return 0;
return 1;
}
+/*
+ * The BIOS is supposed to supply a single APIC/MADT,
+ * but some report two. Provide a knob to use either.
+ * (don't you wish instance 0 and 1 were not the same?)
+ */
+static void __init check_multiple_madt(void)
+{
+ struct acpi_table_header *table = NULL;
+
+ acpi_get_table(ACPI_SIG_MADT, 2, &table);
+ if (table) {
+ printk(KERN_WARNING PREFIX
+ "BIOS bug: multiple APIC/MADT found,"
+ " using %d\n", acpi_apic_instance);
+ printk(KERN_WARNING PREFIX
+ "If \"acpi_apic_instance=%d\" works better, "
+ "notify linux-acpi@vger.kernel.org\n",
+ acpi_apic_instance ? 0 : 2);
+
+ } else
+ acpi_apic_instance = 0;
+
+ return;
+}
+
/*
* acpi_table_init()
*
* result: sdt_entry[] is initialized
*/
-
int __init acpi_table_init(void)
{
acpi_initialize_tables(initial_tables, ACPI_MAX_TABLES, 0);
+ check_multiple_madt();
+ return 0;
+}
+
+static int __init acpi_parse_apic_instance(char *str)
+{
+
+ acpi_apic_instance = simple_strtoul(str, NULL, 0);
+
+ printk(KERN_NOTICE PREFIX "Shall use APIC/MADT table %d\n",
+ acpi_apic_instance);
+
return 0;
}
+
+early_param("acpi_apic_instance", acpi_parse_apic_instance);
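As a usage note (not part of the patch itself): since acpi_apic_instance is registered as an early boot parameter, selecting the other MADT only requires appending the override to the kernel command line; the printk above then reports which table was chosen ("Shall use APIC/MADT table ..."). A hypothetical command-line fragment:

    root=/dev/sda1 ro acpi_apic_instance=2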
del_timer(&(tz->timer));
} else {
if (timer_pending(&(tz->timer)))
- mod_timer(&(tz->timer), (HZ * sleep_time) / 1000);
+ mod_timer(&(tz->timer),
+ jiffies + (HZ * sleep_time) / 1000);
else {
tz->timer.data = (unsigned long)tz;
tz->timer.function = acpi_thermal_run;
config PATA_SCC
tristate "Toshiba's Cell Reference Set IDE support"
- depends on PCI && PPC_IBM_CELL_BLADE
+ depends on PCI && PPC_CELLEB
help
This option enables support for the built-in IDE controller on
Toshiba Cell Reference Board.
board_ahci_pi = 1,
board_ahci_vt8251 = 2,
board_ahci_ign_iferr = 3,
+ board_ahci_sb600 = 4,
/* global controller registers */
HOST_CAP = 0x00, /* host capabilities */
AHCI_FLAG_NO_NCQ = (1 << 24),
AHCI_FLAG_IGN_IRQ_IF_ERR = (1 << 25), /* ignore IRQ_IF_ERR */
AHCI_FLAG_HONOR_PI = (1 << 26), /* honor PORTS_IMPL */
+ AHCI_FLAG_IGN_SERR_INTERNAL = (1 << 27), /* ignore SERR_INTERNAL */
};
struct ahci_cmd_hdr {
.udma_mask = 0x7f, /* udma0-6 ; FIXME */
.port_ops = &ahci_ops,
},
+ /* board_ahci_sb600 */
+ {
+ .sht = &ahci_sht,
+ .flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
+ ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA |
+ ATA_FLAG_SKIP_D2H_BSY |
+ AHCI_FLAG_IGN_SERR_INTERNAL,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .udma_mask = 0x7f, /* udma0-6 ; FIXME */
+ .port_ops = &ahci_ops,
+ },
+
};
static const struct pci_device_id ahci_pci_tbl[] = {
PCI_CLASS_STORAGE_SATA_AHCI, 0xffffff, board_ahci_ign_iferr },
/* ATI */
- { PCI_VDEVICE(ATI, 0x4380), board_ahci }, /* ATI SB600 non-raid */
+ { PCI_VDEVICE(ATI, 0x4380), board_ahci_sb600 }, /* ATI SB600 non-raid */
{ PCI_VDEVICE(ATI, 0x4381), board_ahci }, /* ATI SB600 raid */
/* VIA */
if (ap->flags & AHCI_FLAG_IGN_IRQ_IF_ERR)
irq_stat &= ~PORT_IRQ_IF_ERR;
- if (irq_stat & PORT_IRQ_TF_ERR)
+ if (irq_stat & PORT_IRQ_TF_ERR) {
err_mask |= AC_ERR_DEV;
+ if (ap->flags & AHCI_FLAG_IGN_SERR_INTERNAL)
+ serror &= ~SERR_INTERNAL;
+ }
if (irq_stat & (PORT_IRQ_HBUS_ERR | PORT_IRQ_HBUS_DATA_ERR)) {
err_mask |= AC_ERR_HOST_BUS;
*gtf_address = 0UL;
*obj_loc = 0UL;
- if (noacpi)
+ if (libata_noacpi)
return 0;
if (ata_msg_probe(ap))
ata_dev_printk(atadev, KERN_DEBUG, "%s: ENTER: port#: %d\n",
__FUNCTION__, ap->port_no);
- if (noacpi || !(ap->cbl == ATA_CBL_SATA))
+ if (libata_noacpi || !(ap->cbl == ATA_CBL_SATA))
return 0;
if (!ata_dev_enabled(atadev) || (ap->flags & ATA_FLAG_DISABLED))
unsigned long gtf_address;
unsigned long obj_loc;
- if (noacpi)
+ if (libata_noacpi)
return 0;
/*
* TBD - implement PATA support. For now,
struct acpi_object_list input;
union acpi_object in_params[1];
- if (noacpi)
+ if (libata_noacpi)
return 0;
if (ata_msg_probe(ap))
module_param(ata_probe_timeout, int, 0444);
MODULE_PARM_DESC(ata_probe_timeout, "Set ATA probing timeout (seconds)");
-int noacpi;
-module_param(noacpi, int, 0444);
+int libata_noacpi = 1;
+module_param_named(noacpi, libata_noacpi, int, 0444);
MODULE_PARM_DESC(noacpi, "Disables the use of ACPI in suspend/resume when set");
MODULE_AUTHOR("Jeff Garzik");
/**
* ata_id_to_dma_mode - Identify DMA mode from id block
* @dev: device to identify
- * @mode: mode to assume if we cannot tell
+ * @unknown: mode to assume if we cannot tell
*
* Set up the timing values for the device based upon the identify
* reported values for the DMA mode. This function is used by drivers
dev->max_sectors = ATA_MAX_SECTORS;
}
+ if (ata_device_blacklisted(dev) & ATA_HORKAGE_MAX_SEC_128)
+ dev->max_sectors = min(ATA_MAX_SECTORS_128, dev->max_sectors);
+
+ /* limit ATAPI DMA to R/W commands only */
+ if (ata_device_blacklisted(dev) & ATA_HORKAGE_DMA_RW_ONLY)
+ dev->horkage |= ATA_HORKAGE_DMA_RW_ONLY;
+
if (ap->ops->dev_config)
ap->ops->dev_config(ap, dev);
{ "_NEC DV5800A", NULL, ATA_HORKAGE_NODMA },
{ "SAMSUNG CD-ROM SN-124","N001", ATA_HORKAGE_NODMA },
+ /* Weird ATAPI devices */
+ { "TORiSAN DVD-ROM DRD-N216", NULL, ATA_HORKAGE_MAX_SEC_128 |
+ ATA_HORKAGE_DMA_RW_ONLY },
+
/* Devices we expect to fail diagnostics */
/* Devices where NCQ should be avoided */
{ "WDC WD740ADFD-00", NULL, ATA_HORKAGE_NONCQ },
/* http://thread.gmane.org/gmane.linux.ide/14907 */
{ "FUJITSU MHT2060BH", NULL, ATA_HORKAGE_NONCQ },
+ /* NCQ is broken */
+ { "Maxtor 6L250S0", "BANC1G10", ATA_HORKAGE_NONCQ },
+ /* NCQ hard hangs device under heavier load, needs hard power cycle */
+ { "Maxtor 6B250S0", "BANC1B70", ATA_HORKAGE_NONCQ },
+ /* Blacklist entries taken from Silicon Image 3124/3132
+ Windows driver .inf file - also several Linux problem reports */
+ { "HTS541060G9SA00", "MB3OC60D", ATA_HORKAGE_NONCQ, },
+ { "HTS541080G9SA00", "MB4OC60D", ATA_HORKAGE_NONCQ, },
+ { "HTS541010G9SA00", "MBZOC60D", ATA_HORKAGE_NONCQ, },
/* Devices with NCQ limits */
struct ata_port *ap = qc->ap;
int rc = 0; /* Assume ATAPI DMA is OK by default */
+ /* some drives can only do ATAPI DMA on read/write */
+ if (unlikely(qc->dev->horkage & ATA_HORKAGE_DMA_RW_ONLY)) {
+ struct scsi_cmnd *cmd = qc->scsicmd;
+ u8 *scsicmd = cmd->cmnd;
+
+ switch (scsicmd[0]) {
+ case READ_10:
+ case WRITE_10:
+ case READ_12:
+ case WRITE_12:
+ case READ_6:
+ case WRITE_6:
+ /* atapi dma maybe ok */
+ break;
+ default:
+ /* turn off atapi dma */
+ return 1;
+ }
+ }
+
if (ap->ops->check_atapi_dma)
rc = ap->ops->check_atapi_dma(qc);
{
struct ata_port *ap = qc->ap;
- ap->ops->tf_read(ap, &qc->result_tf);
qc->result_tf.flags = qc->tf.flags;
+ ap->ops->tf_read(ap, &qc->result_tf);
}
/**
* RETURNS:
* 0 on success, AC_ERR_* mask on failure
*/
-static unsigned int atapi_eh_request_sense(struct ata_device *dev,
- unsigned char *sense_buf)
+static unsigned int atapi_eh_request_sense(struct ata_queued_cmd *qc)
{
+ struct ata_device *dev = qc->dev;
+ unsigned char *sense_buf = qc->scsicmd->sense_buffer;
struct ata_port *ap = dev->ap;
struct ata_taskfile tf;
u8 cdb[ATAPI_CDB_LEN];
DPRINTK("ATAPI request sense\n");
- ata_tf_init(dev, &tf);
-
/* FIXME: is this needed? */
memset(sense_buf, 0, SCSI_SENSE_BUFFERSIZE);
- /* XXX: why tf_read here? */
- ap->ops->tf_read(ap, &tf);
-
- /* fill these in, for the case where they are -not- overwritten */
+ /* initialize sense_buf with the error register,
+ * for the case where they are -not- overwritten
+ */
sense_buf[0] = 0x70;
- sense_buf[2] = tf.feature >> 4;
+ sense_buf[2] = qc->result_tf.feature >> 4;
+
+ /* some devices time out if garbage left in tf */
+ ata_tf_init(dev, &tf);
memset(cdb, 0, ATAPI_CDB_LEN);
cdb[0] = REQUEST_SENSE;
case ATA_DEV_ATAPI:
if (!(qc->ap->pflags & ATA_PFLAG_FROZEN)) {
- tmp = atapi_eh_request_sense(qc->dev,
- qc->scsicmd->sense_buffer);
+ tmp = atapi_eh_request_sense(qc);
if (!tmp) {
/* ATA_QCFLAG_SENSE_VALID is used to
* tell atapi_qc_complete() that sense
rc = prereset(ap);
if (rc) {
if (rc == -ENOENT) {
- ata_port_printk(ap, KERN_DEBUG, "port disabled. ignoring.\n");
+ ata_port_printk(ap, KERN_DEBUG,
+ "port disabled. ignoring.\n");
ap->eh_context.i.action &= ~ATA_EH_RESET_MASK;
+
+ for (i = 0; i < ATA_MAX_DEVICES; i++)
+ classes[i] = ATA_DEV_NONE;
+
+ rc = 0;
} else
ata_port_printk(ap, KERN_ERR,
"prereset failed (errno=%d)\n", rc);
{
struct ata_eh_context *ehc = &ap->eh_context;
struct ata_device *dev;
+ unsigned int new_mask = 0;
unsigned long flags;
int i, rc = 0;
DPRINTK("ENTER\n");
- for (i = 0; i < ATA_MAX_DEVICES; i++) {
+ /* For PATA drive side cable detection to work, IDENTIFY must
+ * be done backwards such that PDIAG- is released by the slave
+ * device before the master device is identified.
+ */
+ for (i = ATA_MAX_DEVICES - 1; i >= 0; i--) {
unsigned int action, readid_flags = 0;
dev = &ap->device[i];
if (action & ATA_EH_REVALIDATE && ata_dev_ready(dev)) {
if (ata_port_offline(ap)) {
rc = -EIO;
- break;
+ goto err;
}
ata_eh_about_to_do(ap, dev, ATA_EH_REVALIDATE);
rc = ata_dev_revalidate(dev, readid_flags);
if (rc)
- break;
+ goto err;
ata_eh_done(ap, dev, ATA_EH_REVALIDATE);
rc = ata_dev_read_id(dev, &dev->class, readid_flags,
dev->id);
- if (rc == 0) {
- ehc->i.flags |= ATA_EHI_PRINTINFO;
- rc = ata_dev_configure(dev);
- ehc->i.flags &= ~ATA_EHI_PRINTINFO;
- } else if (rc == -ENOENT) {
+ switch (rc) {
+ case 0:
+ new_mask |= 1 << i;
+ break;
+ case -ENOENT:
/* IDENTIFY was issued to non-existent
* device. No need to reset. Just
* thaw and kill the device.
*/
ata_eh_thaw_port(ap);
- dev->class = ATA_DEV_UNKNOWN;
- rc = 0;
- }
-
- if (rc) {
dev->class = ATA_DEV_UNKNOWN;
break;
+ default:
+ dev->class = ATA_DEV_UNKNOWN;
+ goto err;
}
+ }
+ }
- if (ata_dev_enabled(dev)) {
- spin_lock_irqsave(ap->lock, flags);
- ap->pflags |= ATA_PFLAG_SCSI_HOTPLUG;
- spin_unlock_irqrestore(ap->lock, flags);
+ /* Configure new devices forward such that user doesn't see
+ * device detection messages backwards.
+ */
+ for (i = 0; i < ATA_MAX_DEVICES; i++) {
+ dev = &ap->device[i];
- /* new device discovered, configure xfermode */
- ehc->i.flags |= ATA_EHI_SETMODE;
- }
- }
+ if (!(new_mask & (1 << i)))
+ continue;
+
+ ehc->i.flags |= ATA_EHI_PRINTINFO;
+ rc = ata_dev_configure(dev);
+ ehc->i.flags &= ~ATA_EHI_PRINTINFO;
+ if (rc)
+ goto err;
+
+ spin_lock_irqsave(ap->lock, flags);
+ ap->pflags |= ATA_PFLAG_SCSI_HOTPLUG;
+ spin_unlock_irqrestore(ap->lock, flags);
+
+ /* new device discovered, configure xfermode */
+ ehc->i.flags |= ATA_EHI_SETMODE;
}
- if (rc)
- *r_failed_dev = dev;
+ return 0;
- DPRINTK("EXIT\n");
+ err:
+ *r_failed_dev = dev;
+ DPRINTK("EXIT rc=%d\n", rc);
return rc;
}
scsi_cmd[8] = args[3];
scsi_cmd[10] = args[4];
scsi_cmd[12] = args[5];
- scsi_cmd[13] = args[6] & 0x0f;
+ scsi_cmd[13] = args[6] & 0x4f;
scsi_cmd[14] = args[0];
/* Good values for timeout and retries? Values below
extern int atapi_enabled;
extern int atapi_dmadir;
extern int libata_fua;
-extern int noacpi;
+extern int libata_noacpi;
extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev);
extern int ata_build_rw_tf(struct ata_taskfile *tf, struct ata_device *dev,
u64 block, u32 n_block, unsigned int tf_flags,
static int __devinit cs5520_init_one(struct pci_dev *dev, const struct pci_device_id *id)
{
u8 pcicfg;
- void *iomap[5];
+ void __iomem *iomap[5];
static struct ata_probe_ent probe[2];
int ports = 0;
irq = platform_get_irq(pdev, 0);
if (irq)
- set_irq_type(irq, IRQT_HIGH);
+ set_irq_type(irq, IRQT_RISING);
/* Setup expansion bus chip selects */
*data->cs0_cfg = data->cs0_bits;
struct ata_host *host = platform_get_drvdata(dev);
ata_host_detach(host);
- platform_set_drvdata(dev, NULL);
return 0;
}
ae->dev = dev;
ae->irq = priv->ata_irq;
- aio->cmd_addr = 0; /* Don't have a classic reg block */
+ aio->cmd_addr = NULL; /* Don't have a classic reg block */
aio->altstatus_addr = &priv->ata_regs->tf_control;
aio->ctl_addr = &priv->ata_regs->tf_control;
aio->data_addr = &priv->ata_regs->tf_data;
/* Cases the state machine will not complete correctly without help */
if ((tf->flags & ATA_TFLAG_LBA48) || tf->protocol == ATA_PROT_ATAPI_DMA)
{
- len = qc->nbytes;
+ len = qc->nbytes / 2;
if (tf->flags & ATA_TFLAG_WRITE)
len |= 0x06000000;
struct ata_port_info *port;
struct pci_dev *host = NULL;
struct sis_chipset *chipset = NULL;
+ struct sis_chipset *sets;
static struct sis_chipset sis_chipsets[] = {
/* We have to find the bridge first */
- for (chipset = &sis_chipsets[0]; chipset->device; chipset++) {
- host = pci_get_device(PCI_VENDOR_ID_SI, chipset->device, NULL);
+ for (sets = &sis_chipsets[0]; sets->device; sets++) {
+ host = pci_get_device(PCI_VENDOR_ID_SI, sets->device, NULL);
if (host != NULL) {
- if (chipset->device == 0x630) { /* SIS630 */
+ chipset = sets; /* Match found */
+ if (sets->device == 0x630) { /* SIS630 */
u8 host_rev;
pci_read_config_byte(host, PCI_REVISION_ID, &host_rev);
if (host_rev >= 0x30) /* 630 ET */
}
/* Look for concealed bridges */
- if (host == NULL) {
+ if (chipset == NULL) {
/* Second check */
u32 idemisc;
u16 trueid;
if (rc)
return rc;
- rc = pci_request_regions(pdev, DRV_NAME);
- if (rc)
- return rc;
-
rc = pcim_iomap_regions(pdev, 0x3f, DRV_NAME);
if (rc)
return rc;
{ PCI_VDEVICE(CMD, 0x3124), BID_SIL3124 },
{ PCI_VDEVICE(INTEL, 0x3124), BID_SIL3124 },
{ PCI_VDEVICE(CMD, 0x3132), BID_SIL3132 },
+ { PCI_VDEVICE(CMD, 0x0242), BID_SIL3132 },
{ PCI_VDEVICE(CMD, 0x3131), BID_SIL3131 },
{ PCI_VDEVICE(CMD, 0x3531), BID_SIL3131 },
return -ENOMEM;
if (!(probe_ent->port_flags & SIS_FLAG_CFGSCR)) {
- void *mmio;
+ void __iomem *mmio;
mmio = pcim_iomap(pdev, SIS_SCR_PCI_BAR, 0);
if (!mmio)
/*--------------------------------- entries ---------------------------------*/
-static int __init zatm_init(struct atm_dev *dev)
+static int __devinit zatm_init(struct atm_dev *dev)
{
struct zatm_dev *zatm_dev;
struct pci_dev *pci_dev;
}
-static int __init zatm_start(struct atm_dev *dev)
+static int __devinit zatm_start(struct atm_dev *dev)
{
struct zatm_dev *zatm_dev = ZATM_DEV(dev);
struct pci_dev *pdev = zatm_dev->pci_dev;
int (*platform_notify)(struct device * dev) = NULL;
int (*platform_notify_remove)(struct device * dev) = NULL;
-/*
- * Detect the LANANA-assigned LOCAL/EXPERIMENTAL majors
- */
-bool is_lanana_major(unsigned int major)
-{
- if (major >= 60 && major <= 63)
- return 1;
- if (major >= 120 && major <= 127)
- return 1;
- if (major >= 240 && major <= 254)
- return 1;
- return 0;
-}
-
/*
* sysfs bindings for devices.
*/
}
EXPORT_SYMBOL_GPL(device_remove_bin_file);
+/**
+ * device_schedule_callback - helper to schedule a callback for a device
+ * @dev: device.
+ * @func: callback function to invoke later.
+ *
+ * Attribute methods must not unregister themselves or their parent device
+ * (which would amount to the same thing). Attempts to do so will deadlock,
+ * since unregistration is mutually exclusive with driver callbacks.
+ *
+ * Instead methods can call this routine, which will attempt to allocate
+ * and schedule a workqueue request to call back @func with @dev as its
+ * argument in the workqueue's process context. @dev will be pinned until
+ * @func returns.
+ *
+ * Returns 0 if the request was submitted, -ENOMEM if storage could not
+ * be allocated.
+ *
+ * NOTE: This routine won't work if CONFIG_SYSFS isn't set! It uses an
+ * underlying sysfs routine (since it is intended for use by attribute
+ * methods), and if sysfs isn't available you'll get nothing but -ENOSYS.
+ */
+int device_schedule_callback(struct device *dev,
+ void (*func)(struct device *))
+{
+ return sysfs_schedule_callback(&dev->kobj,
+ (void (*)(void *)) func, dev);
+}
+EXPORT_SYMBOL_GPL(device_schedule_callback);
+
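For context, a minimal sketch of how an attribute method might use the helper documented above to defer unregistration; the names example_deferred_remove and example_remove_store are hypothetical and not part of this patch:

    #include <linux/device.h>

    /* Hypothetical illustration only. A sysfs "remove" attribute must not
     * call device_unregister() from within its own store method (that would
     * deadlock), so it defers the work via device_schedule_callback().
     */
    static void example_deferred_remove(struct device *dev)
    {
    	/* Runs later in workqueue process context; dev stays pinned
    	 * until this callback returns. */
    	device_unregister(dev);
    }

    static ssize_t example_remove_store(struct device *dev,
    				    struct device_attribute *attr,
    				    const char *buf, size_t count)
    {
    	int ret;

    	ret = device_schedule_callback(dev, example_deferred_remove);
    	if (ret)
    		return ret;	/* -ENOMEM, or -ENOSYS without CONFIG_SYSFS */
    	return count;
    }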
static void klist_children_get(struct klist_node *n)
{
struct device *dev = container_of(n, struct device, knode_parent);
void driver_unregister(struct device_driver * drv)
{
bus_remove_driver(drv);
- wait_for_completion(&drv->unloaded);
+ /*
+ * If the driver is a module, we are probably in
+ * the module unload path, and we want to wait
+ * for everything to unload before we can actually
+ * finish the unload.
+ */
+ if (drv->owner)
+ wait_for_completion(&drv->unloaded);
}
/**
int error;
pr_debug("PM: Adding info for %s:%s\n",
- dev->bus ? dev->bus->name : "No Bus", dev->kobj.name);
+ dev->bus ? dev->bus->name : "No Bus",
+ kobject_name(&dev->kobj));
down(&dpm_list_sem);
list_add_tail(&dev->power.entry, &dpm_active);
device_pm_set_parent(dev, dev->parent);
void device_pm_remove(struct device * dev)
{
pr_debug("PM: Removing info for %s:%s\n",
- dev->bus ? dev->bus->name : "No Bus", dev->kobj.name);
+ dev->bus ? dev->bus->name : "No Bus",
+ kobject_name(&dev->kobj));
down(&dpm_list_sem);
dpm_sysfs_remove(dev);
put_device(dev->power.pm_parent);
if (return_code == IO_OK) {
listlength =
- be32_to_cpu(*(__u32 *) ld_buff->LUNListLength);
+ be32_to_cpu(*(__be32 *) ld_buff->LUNListLength);
} else { /* reading number of logical volumes failed */
printk(KERN_WARNING "cciss: report logical volume"
" command failed\n");
"does not support reading geometry\n");
drv->heads = 255;
drv->sectors = 32; // Sectors per track
+ drv->cylinders = total_size + 1;
drv->raid_level = RAID_UNKNOWN;
} else {
drv->heads = inq_buff->data_byte[6];
ctlr, buf, sizeof(ReadCapdata_struct),
1, logvol, 0, NULL, TYPE_CMD);
if (return_code == IO_OK) {
- *total_size = be32_to_cpu(*(__u32 *) buf->total_size);
- *block_size = be32_to_cpu(*(__u32 *) buf->block_size);
+ *total_size = be32_to_cpu(*(__be32 *) buf->total_size);
+ *block_size = be32_to_cpu(*(__be32 *) buf->block_size);
} else { /* read capacity command failed */
printk(KERN_WARNING "cciss: read capacity failed\n");
*total_size = 0;
1, logvol, 0, NULL, TYPE_CMD);
}
if (return_code == IO_OK) {
- *total_size = be64_to_cpu(*(__u64 *) buf->total_size);
- *block_size = be32_to_cpu(*(__u32 *) buf->block_size);
+ *total_size = be64_to_cpu(*(__be64 *) buf->total_size);
+ *block_size = be32_to_cpu(*(__be32 *) buf->block_size);
} else { /* read capacity command failed */
printk(KERN_WARNING "cciss: read capacity failed\n");
*total_size = 0;
"already be removed \n");
return;
}
+
+ remove_proc_entry(hba[i]->devname, proc_cciss);
+ unregister_blkdev(hba[i]->major, hba[i]->devname);
+
+ /* remove it from the disk list */
+ for (j = 0; j < CISS_MAX_LUN; j++) {
+ struct gendisk *disk = hba[i]->gendisk[j];
+ if (disk) {
+ request_queue_t *q = disk->queue;
+
+ if (disk->flags & GENHD_FL_UP)
+ del_gendisk(disk);
+ if (q)
+ blk_cleanup_queue(q);
+ }
+ }
+
+ cciss_unregister_scsi(i); /* unhook from SCSI subsystem */
+
/* Turn board interrupts off and send the flush cache command */
/* sendcmd will turn off interrupt, and send the flush...
* To write all data in the battery backed cache to disks */
#endif /* CONFIG_PCI_MSI */
iounmap(hba[i]->vaddr);
- cciss_unregister_scsi(i); /* unhook from SCSI subsystem */
- unregister_blkdev(hba[i]->major, hba[i]->devname);
- remove_proc_entry(hba[i]->devname, proc_cciss);
-
- /* remove it from the disk list */
- for (j = 0; j < CISS_MAX_LUN; j++) {
- struct gendisk *disk = hba[i]->gendisk[j];
- if (disk) {
- request_queue_t *q = disk->queue;
-
- if (disk->flags & GENHD_FL_UP)
- del_gendisk(disk);
- if (q)
- blk_cleanup_queue(q);
- }
- }
pci_free_consistent(hba[i]->pdev, hba[i]->nr_cmds * sizeof(CommandList_struct),
hba[i]->cmd_pool, hba[i]->cmd_pool_dhandle);
#include <linux/blkdev.h>
#include <asm/uaccess.h>
-static spinlock_t pcd_lock;
+static DEFINE_SPINLOCK(pcd_lock);
module_param(verbose, bool, 0644);
module_param(major, int, 0);
return Fail;
pi_read_block(disk->pi, pd_scratch, 512);
disk->can_lba = pd_scratch[99] & 2;
- disk->sectors = le16_to_cpu(*(u16 *) (pd_scratch + 12));
- disk->heads = le16_to_cpu(*(u16 *) (pd_scratch + 6));
- disk->cylinders = le16_to_cpu(*(u16 *) (pd_scratch + 2));
+ disk->sectors = le16_to_cpu(*(__le16 *) (pd_scratch + 12));
+ disk->heads = le16_to_cpu(*(__le16 *) (pd_scratch + 6));
+ disk->cylinders = le16_to_cpu(*(__le16 *) (pd_scratch + 2));
if (disk->can_lba)
- disk->capacity = le32_to_cpu(*(u32 *) (pd_scratch + 120));
+ disk->capacity = le32_to_cpu(*(__le32 *) (pd_scratch + 120));
else
disk->capacity = disk->sectors * disk->heads * disk->cylinders;
#include <linux/blkpg.h>
#include <asm/uaccess.h>
-static spinlock_t pf_spin_lock;
+static DEFINE_SPINLOCK(pf_spin_lock);
module_param(verbose, bool, 0644);
module_param(major, int, 0);
rq->cmd_flags |= REQ_QUIET;
blk_execute_rq(rq->q, pd->bdev->bd_disk, rq, 0);
- ret = rq->errors;
+ if (rq->errors)
+ ret = -EIO;
out:
blk_put_request(rq);
return ret;
If you have an Alchemy AU1000 processor (MIPS based) and you want
to use a console on a serial port, say Y. Otherwise, say N.
+config SERIAL_DEC
+ bool "DECstation serial support"
+ depends on MACH_DECSTATION
+ default y
+ help
+ This selects whether you want to be asked about drivers for
+ DECstation serial ports.
+
+ Note that the answer to this question won't directly affect the
+ kernel: saying N will just cause the configurator to skip all
+ the questions about DECstation serial ports.
+
+config SERIAL_DEC_CONSOLE
+ bool "Support for console on a DECstation serial port"
+ depends on SERIAL_DEC
+ default y
+ help
+ If you say Y here, it will be possible to use a serial port as the
+ system console (the system console is the device which receives all
+ kernel messages and warnings and which allows logins in single user
+ mode). Note that the firmware uses ttyS0 as the serial console on
+ the Maxine and ttyS2 on the others.
+
+ If unsure, say Y.
+
+config ZS
+ bool "Z85C30 Serial Support"
+ depends on SERIAL_DEC
+ default y
+ help
+	  Documentation on the Zilog Z85C30 serial communications controller
+ is downloadable at <http://www.zilog.com/pdfs/serial/z85c30.pdf>
+
config A2232
tristate "Commodore A2232 serial support (EXPERIMENTAL)"
depends on EXPERIMENTAL && ZORRO && BROKEN_ON_SMP
#define PCI_DEVICE_ID_INTEL_82965Q_IG 0x2992
#define PCI_DEVICE_ID_INTEL_82965G_HB 0x29A0
#define PCI_DEVICE_ID_INTEL_82965G_IG 0x29A2
+#define PCI_DEVICE_ID_INTEL_82965GM_HB 0x2A00
+#define PCI_DEVICE_ID_INTEL_82965GM_IG 0x2A02
#define IS_I965 (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82946GZ_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965G_1_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965Q_HB || \
- agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965G_HB)
+ agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965G_HB || \
+ agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965GM_HB)
extern int agp_memory_reserved;
if (IS_I965) {
u32 pgetbl_ctl;
+ pgetbl_ctl = readl(intel_i830_private.registers+I810_PGETBL_CTL);
- pci_read_config_dword(agp_bridge->dev, I810_PGETBL_CTL,
- &pgetbl_ctl);
/* The 965 has a field telling us the size of the GTT,
* which may be larger than what is necessary to map the
* aperture.
bridge->driver = &intel_845_driver;
name = "965G";
break;
-
+ case PCI_DEVICE_ID_INTEL_82965GM_HB:
+ if (find_i830(PCI_DEVICE_ID_INTEL_82965GM_IG))
+ bridge->driver = &intel_i965_driver;
+ else
+ bridge->driver = &intel_845_driver;
+ name = "965GM";
+ break;
case PCI_DEVICE_ID_INTEL_7505_0:
bridge->driver = &intel_7505_driver;
name = "E7505";
ID(PCI_DEVICE_ID_INTEL_82965G_1_HB),
ID(PCI_DEVICE_ID_INTEL_82965Q_HB),
ID(PCI_DEVICE_ID_INTEL_82965G_HB),
+ ID(PCI_DEVICE_ID_INTEL_82965GM_HB),
{ }
};
i830-objs := i830_drv.o i830_dma.o i830_irq.o
i915-objs := i915_drv.o i915_dma.o i915_irq.o i915_mem.o
radeon-objs := radeon_drv.o radeon_cp.o radeon_state.o radeon_mem.o radeon_irq.o r300_cmdbuf.o
-ffb-objs := ffb_drv.o ffb_context.o
sis-objs := sis_drv.o sis_mm.o
savage-objs := savage_drv.o savage_bci.o savage_state.o
via-objs := via_irq.o via_drv.o via_map.o via_mm.o via_dma.o via_verifier.o via_video.o via_dmablit.o
obj-$(CONFIG_DRM_I810) += i810.o
obj-$(CONFIG_DRM_I830) += i830.o
obj-$(CONFIG_DRM_I915) += i915.o
-obj-$(CONFIG_DRM_FFB) += ffb.o
obj-$(CONFIG_DRM_SIS) += sis.o
obj-$(CONFIG_DRM_SAVAGE)+= savage.o
obj-$(CONFIG_DRM_VIA) +=via.o
+++ /dev/null
-/* $Id: ffb_context.c,v 1.5 2001/08/09 17:47:51 davem Exp $
- * ffb_context.c: Creator/Creator3D DRI/DRM context switching.
- *
- * Copyright (C) 2000 David S. Miller (davem@redhat.com)
- *
- * Almost entirely stolen from tdfx_context.c, see there
- * for authors.
- */
-
-#include <asm/upa.h>
-
-#include "ffb.h"
-#include "drmP.h"
-
-#include "ffb_drv.h"
-
-static int DRM(alloc_queue) (drm_device_t * dev, int is_2d_only) {
- ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private;
- int i;
-
- for (i = 0; i < FFB_MAX_CTXS; i++) {
- if (fpriv->hw_state[i] == NULL)
- break;
- }
- if (i == FFB_MAX_CTXS)
- return -1;
-
- fpriv->hw_state[i] = kmalloc(sizeof(struct ffb_hw_context), GFP_KERNEL);
- if (fpriv->hw_state[i] == NULL)
- return -1;
-
- fpriv->hw_state[i]->is_2d_only = is_2d_only;
-
- /* Plus one because 0 is the special DRM_KERNEL_CONTEXT. */
- return i + 1;
-}
-
-static void ffb_save_context(ffb_dev_priv_t * fpriv, int idx)
-{
- ffb_fbcPtr ffb = fpriv->regs;
- struct ffb_hw_context *ctx;
- int i;
-
- ctx = fpriv->hw_state[idx - 1];
- if (idx == 0 || ctx == NULL)
- return;
-
- if (ctx->is_2d_only) {
- /* 2D applications only care about certain pieces
- * of state.
- */
- ctx->drawop = upa_readl(&ffb->drawop);
- ctx->ppc = upa_readl(&ffb->ppc);
- ctx->wid = upa_readl(&ffb->wid);
- ctx->fg = upa_readl(&ffb->fg);
- ctx->bg = upa_readl(&ffb->bg);
- ctx->xclip = upa_readl(&ffb->xclip);
- ctx->fbc = upa_readl(&ffb->fbc);
- ctx->rop = upa_readl(&ffb->rop);
- ctx->cmp = upa_readl(&ffb->cmp);
- ctx->matchab = upa_readl(&ffb->matchab);
- ctx->magnab = upa_readl(&ffb->magnab);
- ctx->pmask = upa_readl(&ffb->pmask);
- ctx->xpmask = upa_readl(&ffb->xpmask);
- ctx->lpat = upa_readl(&ffb->lpat);
- ctx->fontxy = upa_readl(&ffb->fontxy);
- ctx->fontw = upa_readl(&ffb->fontw);
- ctx->fontinc = upa_readl(&ffb->fontinc);
-
- /* stencil/stencilctl only exists on FFB2+ and later
- * due to the introduction of 3DRAM-III.
- */
- if (fpriv->ffb_type == ffb2_vertical_plus ||
- fpriv->ffb_type == ffb2_horizontal_plus) {
- ctx->stencil = upa_readl(&ffb->stencil);
- ctx->stencilctl = upa_readl(&ffb->stencilctl);
- }
-
- for (i = 0; i < 32; i++)
- ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]);
- ctx->ucsr = upa_readl(&ffb->ucsr);
- return;
- }
-
- /* Fetch drawop. */
- ctx->drawop = upa_readl(&ffb->drawop);
-
- /* If we were saving the vertex registers, this is where
- * we would do it. We would save 32 32-bit words starting
- * at ffb->suvtx.
- */
-
- /* Capture rendering attributes. */
-
- ctx->ppc = upa_readl(&ffb->ppc); /* Pixel Processor Control */
- ctx->wid = upa_readl(&ffb->wid); /* Current WID */
- ctx->fg = upa_readl(&ffb->fg); /* Constant FG color */
- ctx->bg = upa_readl(&ffb->bg); /* Constant BG color */
- ctx->consty = upa_readl(&ffb->consty); /* Constant Y */
- ctx->constz = upa_readl(&ffb->constz); /* Constant Z */
- ctx->xclip = upa_readl(&ffb->xclip); /* X plane clip */
- ctx->dcss = upa_readl(&ffb->dcss); /* Depth Cue Scale Slope */
- ctx->vclipmin = upa_readl(&ffb->vclipmin); /* Primary XY clip, minimum */
- ctx->vclipmax = upa_readl(&ffb->vclipmax); /* Primary XY clip, maximum */
- ctx->vclipzmin = upa_readl(&ffb->vclipzmin); /* Primary Z clip, minimum */
- ctx->vclipzmax = upa_readl(&ffb->vclipzmax); /* Primary Z clip, maximum */
- ctx->dcsf = upa_readl(&ffb->dcsf); /* Depth Cue Scale Front Bound */
- ctx->dcsb = upa_readl(&ffb->dcsb); /* Depth Cue Scale Back Bound */
- ctx->dczf = upa_readl(&ffb->dczf); /* Depth Cue Scale Z Front */
- ctx->dczb = upa_readl(&ffb->dczb); /* Depth Cue Scale Z Back */
- ctx->blendc = upa_readl(&ffb->blendc); /* Alpha Blend Control */
- ctx->blendc1 = upa_readl(&ffb->blendc1); /* Alpha Blend Color 1 */
- ctx->blendc2 = upa_readl(&ffb->blendc2); /* Alpha Blend Color 2 */
- ctx->fbc = upa_readl(&ffb->fbc); /* Frame Buffer Control */
- ctx->rop = upa_readl(&ffb->rop); /* Raster Operation */
- ctx->cmp = upa_readl(&ffb->cmp); /* Compare Controls */
- ctx->matchab = upa_readl(&ffb->matchab); /* Buffer A/B Match Ops */
- ctx->matchc = upa_readl(&ffb->matchc); /* Buffer C Match Ops */
- ctx->magnab = upa_readl(&ffb->magnab); /* Buffer A/B Magnitude Ops */
- ctx->magnc = upa_readl(&ffb->magnc); /* Buffer C Magnitude Ops */
- ctx->pmask = upa_readl(&ffb->pmask); /* RGB Plane Mask */
- ctx->xpmask = upa_readl(&ffb->xpmask); /* X Plane Mask */
- ctx->ypmask = upa_readl(&ffb->ypmask); /* Y Plane Mask */
- ctx->zpmask = upa_readl(&ffb->zpmask); /* Z Plane Mask */
-
- /* Auxiliary Clips. */
- ctx->auxclip0min = upa_readl(&ffb->auxclip[0].min);
- ctx->auxclip0max = upa_readl(&ffb->auxclip[0].max);
- ctx->auxclip1min = upa_readl(&ffb->auxclip[1].min);
- ctx->auxclip1max = upa_readl(&ffb->auxclip[1].max);
- ctx->auxclip2min = upa_readl(&ffb->auxclip[2].min);
- ctx->auxclip2max = upa_readl(&ffb->auxclip[2].max);
- ctx->auxclip3min = upa_readl(&ffb->auxclip[3].min);
- ctx->auxclip3max = upa_readl(&ffb->auxclip[3].max);
-
- ctx->lpat = upa_readl(&ffb->lpat); /* Line Pattern */
- ctx->fontxy = upa_readl(&ffb->fontxy); /* XY Font Coordinate */
- ctx->fontw = upa_readl(&ffb->fontw); /* Font Width */
- ctx->fontinc = upa_readl(&ffb->fontinc); /* Font X/Y Increment */
-
- /* These registers/features only exist on FFB2 and later chips. */
- if (fpriv->ffb_type >= ffb2_prototype) {
- ctx->dcss1 = upa_readl(&ffb->dcss1); /* Depth Cue Scale Slope 1 */
- ctx->dcss2 = upa_readl(&ffb->dcss2); /* Depth Cue Scale Slope 2 */
- ctx->dcss2 = upa_readl(&ffb->dcss3); /* Depth Cue Scale Slope 3 */
- ctx->dcs2 = upa_readl(&ffb->dcs2); /* Depth Cue Scale 2 */
- ctx->dcs3 = upa_readl(&ffb->dcs3); /* Depth Cue Scale 3 */
- ctx->dcs4 = upa_readl(&ffb->dcs4); /* Depth Cue Scale 4 */
- ctx->dcd2 = upa_readl(&ffb->dcd2); /* Depth Cue Depth 2 */
- ctx->dcd3 = upa_readl(&ffb->dcd3); /* Depth Cue Depth 3 */
- ctx->dcd4 = upa_readl(&ffb->dcd4); /* Depth Cue Depth 4 */
-
- /* And stencil/stencilctl only exists on FFB2+ and later
- * due to the introduction of 3DRAM-III.
- */
- if (fpriv->ffb_type == ffb2_vertical_plus ||
- fpriv->ffb_type == ffb2_horizontal_plus) {
- ctx->stencil = upa_readl(&ffb->stencil);
- ctx->stencilctl = upa_readl(&ffb->stencilctl);
- }
- }
-
- /* Save the 32x32 area pattern. */
- for (i = 0; i < 32; i++)
- ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]);
-
- /* Finally, stash away the User Constol/Status Register. */
- ctx->ucsr = upa_readl(&ffb->ucsr);
-}
-
-static void ffb_restore_context(ffb_dev_priv_t * fpriv, int old, int idx)
-{
- ffb_fbcPtr ffb = fpriv->regs;
- struct ffb_hw_context *ctx;
- int i;
-
- ctx = fpriv->hw_state[idx - 1];
- if (idx == 0 || ctx == NULL)
- return;
-
- if (ctx->is_2d_only) {
- /* 2D applications only care about certain pieces
- * of state.
- */
- upa_writel(ctx->drawop, &ffb->drawop);
-
- /* If we were restoring the vertex registers, this is where
- * we would do it. We would restore 32 32-bit words starting
- * at ffb->suvtx.
- */
-
- upa_writel(ctx->ppc, &ffb->ppc);
- upa_writel(ctx->wid, &ffb->wid);
- upa_writel(ctx->fg, &ffb->fg);
- upa_writel(ctx->bg, &ffb->bg);
- upa_writel(ctx->xclip, &ffb->xclip);
- upa_writel(ctx->fbc, &ffb->fbc);
- upa_writel(ctx->rop, &ffb->rop);
- upa_writel(ctx->cmp, &ffb->cmp);
- upa_writel(ctx->matchab, &ffb->matchab);
- upa_writel(ctx->magnab, &ffb->magnab);
- upa_writel(ctx->pmask, &ffb->pmask);
- upa_writel(ctx->xpmask, &ffb->xpmask);
- upa_writel(ctx->lpat, &ffb->lpat);
- upa_writel(ctx->fontxy, &ffb->fontxy);
- upa_writel(ctx->fontw, &ffb->fontw);
- upa_writel(ctx->fontinc, &ffb->fontinc);
-
- /* stencil/stencilctl only exists on FFB2+ and later
- * due to the introduction of 3DRAM-III.
- */
- if (fpriv->ffb_type == ffb2_vertical_plus ||
- fpriv->ffb_type == ffb2_horizontal_plus) {
- upa_writel(ctx->stencil, &ffb->stencil);
- upa_writel(ctx->stencilctl, &ffb->stencilctl);
- upa_writel(0x80000000, &ffb->fbc);
- upa_writel((ctx->stencilctl | 0x80000),
- &ffb->rawstencilctl);
- upa_writel(ctx->fbc, &ffb->fbc);
- }
-
- for (i = 0; i < 32; i++)
- upa_writel(ctx->area_pattern[i], &ffb->pattern[i]);
- upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr);
- return;
- }
-
- /* Restore drawop. */
- upa_writel(ctx->drawop, &ffb->drawop);
-
- /* If we were restoring the vertex registers, this is where
- * we would do it. We would restore 32 32-bit words starting
- * at ffb->suvtx.
- */
-
- /* Restore rendering attributes. */
-
- upa_writel(ctx->ppc, &ffb->ppc); /* Pixel Processor Control */
- upa_writel(ctx->wid, &ffb->wid); /* Current WID */
- upa_writel(ctx->fg, &ffb->fg); /* Constant FG color */
- upa_writel(ctx->bg, &ffb->bg); /* Constant BG color */
- upa_writel(ctx->consty, &ffb->consty); /* Constant Y */
- upa_writel(ctx->constz, &ffb->constz); /* Constant Z */
- upa_writel(ctx->xclip, &ffb->xclip); /* X plane clip */
- upa_writel(ctx->dcss, &ffb->dcss); /* Depth Cue Scale Slope */
- upa_writel(ctx->vclipmin, &ffb->vclipmin); /* Primary XY clip, minimum */
- upa_writel(ctx->vclipmax, &ffb->vclipmax); /* Primary XY clip, maximum */
- upa_writel(ctx->vclipzmin, &ffb->vclipzmin); /* Primary Z clip, minimum */
- upa_writel(ctx->vclipzmax, &ffb->vclipzmax); /* Primary Z clip, maximum */
- upa_writel(ctx->dcsf, &ffb->dcsf); /* Depth Cue Scale Front Bound */
- upa_writel(ctx->dcsb, &ffb->dcsb); /* Depth Cue Scale Back Bound */
- upa_writel(ctx->dczf, &ffb->dczf); /* Depth Cue Scale Z Front */
- upa_writel(ctx->dczb, &ffb->dczb); /* Depth Cue Scale Z Back */
- upa_writel(ctx->blendc, &ffb->blendc); /* Alpha Blend Control */
- upa_writel(ctx->blendc1, &ffb->blendc1); /* Alpha Blend Color 1 */
- upa_writel(ctx->blendc2, &ffb->blendc2); /* Alpha Blend Color 2 */
- upa_writel(ctx->fbc, &ffb->fbc); /* Frame Buffer Control */
- upa_writel(ctx->rop, &ffb->rop); /* Raster Operation */
- upa_writel(ctx->cmp, &ffb->cmp); /* Compare Controls */
- upa_writel(ctx->matchab, &ffb->matchab); /* Buffer A/B Match Ops */
- upa_writel(ctx->matchc, &ffb->matchc); /* Buffer C Match Ops */
- upa_writel(ctx->magnab, &ffb->magnab); /* Buffer A/B Magnitude Ops */
- upa_writel(ctx->magnc, &ffb->magnc); /* Buffer C Magnitude Ops */
- upa_writel(ctx->pmask, &ffb->pmask); /* RGB Plane Mask */
- upa_writel(ctx->xpmask, &ffb->xpmask); /* X Plane Mask */
- upa_writel(ctx->ypmask, &ffb->ypmask); /* Y Plane Mask */
- upa_writel(ctx->zpmask, &ffb->zpmask); /* Z Plane Mask */
-
- /* Auxiliary Clips. */
- upa_writel(ctx->auxclip0min, &ffb->auxclip[0].min);
- upa_writel(ctx->auxclip0max, &ffb->auxclip[0].max);
- upa_writel(ctx->auxclip1min, &ffb->auxclip[1].min);
- upa_writel(ctx->auxclip1max, &ffb->auxclip[1].max);
- upa_writel(ctx->auxclip2min, &ffb->auxclip[2].min);
- upa_writel(ctx->auxclip2max, &ffb->auxclip[2].max);
- upa_writel(ctx->auxclip3min, &ffb->auxclip[3].min);
- upa_writel(ctx->auxclip3max, &ffb->auxclip[3].max);
-
- upa_writel(ctx->lpat, &ffb->lpat); /* Line Pattern */
- upa_writel(ctx->fontxy, &ffb->fontxy); /* XY Font Coordinate */
- upa_writel(ctx->fontw, &ffb->fontw); /* Font Width */
- upa_writel(ctx->fontinc, &ffb->fontinc); /* Font X/Y Increment */
-
- /* These registers/features only exist on FFB2 and later chips. */
- if (fpriv->ffb_type >= ffb2_prototype) {
- upa_writel(ctx->dcss1, &ffb->dcss1); /* Depth Cue Scale Slope 1 */
- upa_writel(ctx->dcss2, &ffb->dcss2); /* Depth Cue Scale Slope 2 */
- upa_writel(ctx->dcss3, &ffb->dcss2); /* Depth Cue Scale Slope 3 */
- upa_writel(ctx->dcs2, &ffb->dcs2); /* Depth Cue Scale 2 */
- upa_writel(ctx->dcs3, &ffb->dcs3); /* Depth Cue Scale 3 */
- upa_writel(ctx->dcs4, &ffb->dcs4); /* Depth Cue Scale 4 */
- upa_writel(ctx->dcd2, &ffb->dcd2); /* Depth Cue Depth 2 */
- upa_writel(ctx->dcd3, &ffb->dcd3); /* Depth Cue Depth 3 */
- upa_writel(ctx->dcd4, &ffb->dcd4); /* Depth Cue Depth 4 */
-
- /* And stencil/stencilctl only exists on FFB2+ and later
- * due to the introduction of 3DRAM-III.
- */
- if (fpriv->ffb_type == ffb2_vertical_plus ||
- fpriv->ffb_type == ffb2_horizontal_plus) {
- /* Unfortunately, there is a hardware bug on
- * the FFB2+ chips which prevents a normal write
- * to the stencil control register from working
- * as it should.
- *
- * The state controlled by the FFB stencilctl register
- * really gets transferred to the per-buffer instances
- * of the stencilctl register in the 3DRAM chips.
- *
- * The bug is that FFB does not update buffer C correctly,
- * so we have to do it by hand for them.
- */
-
- /* This will update buffers A and B. */
- upa_writel(ctx->stencil, &ffb->stencil);
- upa_writel(ctx->stencilctl, &ffb->stencilctl);
-
- /* Force FFB to use buffer C 3dram regs. */
- upa_writel(0x80000000, &ffb->fbc);
- upa_writel((ctx->stencilctl | 0x80000),
- &ffb->rawstencilctl);
-
- /* Now restore the correct FBC controls. */
- upa_writel(ctx->fbc, &ffb->fbc);
- }
- }
-
- /* Restore the 32x32 area pattern. */
- for (i = 0; i < 32; i++)
- upa_writel(ctx->area_pattern[i], &ffb->pattern[i]);
-
- /* Finally, stash away the User Constol/Status Register.
- * The only state we really preserve here is the picking
- * control.
- */
- upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr);
-}
-
-#define FFB_UCSR_FB_BUSY 0x01000000
-#define FFB_UCSR_RP_BUSY 0x02000000
-#define FFB_UCSR_ALL_BUSY (FFB_UCSR_RP_BUSY|FFB_UCSR_FB_BUSY)
-
-static void FFBWait(ffb_fbcPtr ffb)
-{
- int limit = 100000;
-
- do {
- u32 regval = upa_readl(&ffb->ucsr);
-
- if ((regval & FFB_UCSR_ALL_BUSY) == 0)
- break;
- } while (--limit);
-}
-
-int ffb_driver_context_switch(drm_device_t * dev, int old, int new)
-{
- ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private;
-
-#ifdef DRM_DMA_HISTOGRAM
- dev->ctx_start = get_cycles();
-#endif
-
- DRM_DEBUG("Context switch from %d to %d\n", old, new);
-
- if (new == dev->last_context || dev->last_context == 0) {
- dev->last_context = new;
- return 0;
- }
-
- FFBWait(fpriv->regs);
- ffb_save_context(fpriv, old);
- ffb_restore_context(fpriv, old, new);
- FFBWait(fpriv->regs);
-
- dev->last_context = new;
-
- return 0;
-}
-
-int ffb_driver_resctx(struct inode *inode, struct file *filp, unsigned int cmd,
- unsigned long arg)
-{
- drm_ctx_res_t res;
- drm_ctx_t ctx;
- int i;
-
- DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS);
- if (copy_from_user(&res, (drm_ctx_res_t __user *) arg, sizeof(res)))
- return -EFAULT;
- if (res.count >= DRM_RESERVED_CONTEXTS) {
- memset(&ctx, 0, sizeof(ctx));
- for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
- ctx.handle = i;
- if (copy_to_user(&res.contexts[i], &i, sizeof(i)))
- return -EFAULT;
- }
- }
- res.count = DRM_RESERVED_CONTEXTS;
- if (copy_to_user((drm_ctx_res_t __user *) arg, &res, sizeof(res)))
- return -EFAULT;
- return 0;
-}
-
-int ffb_driver_addctx(struct inode *inode, struct file *filp, unsigned int cmd,
- unsigned long arg)
-{
- drm_file_t *priv = filp->private_data;
- drm_device_t *dev = priv->dev;
- drm_ctx_t ctx;
- int idx;
-
- if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx)))
- return -EFAULT;
- idx = DRM(alloc_queue) (dev, (ctx.flags & _DRM_CONTEXT_2DONLY));
- if (idx < 0)
- return -ENFILE;
-
- DRM_DEBUG("%d\n", ctx.handle);
- ctx.handle = idx;
- if (copy_to_user((drm_ctx_t __user *) arg, &ctx, sizeof(ctx)))
- return -EFAULT;
- return 0;
-}
-
-int ffb_driver_modctx(struct inode *inode, struct file *filp, unsigned int cmd,
- unsigned long arg)
-{
- drm_file_t *priv = filp->private_data;
- drm_device_t *dev = priv->dev;
- ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private;
- struct ffb_hw_context *hwctx;
- drm_ctx_t ctx;
- int idx;
-
- if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx)))
- return -EFAULT;
-
- idx = ctx.handle;
- if (idx <= 0 || idx >= FFB_MAX_CTXS)
- return -EINVAL;
-
- hwctx = fpriv->hw_state[idx - 1];
- if (hwctx == NULL)
- return -EINVAL;
-
- if ((ctx.flags & _DRM_CONTEXT_2DONLY) == 0)
- hwctx->is_2d_only = 0;
- else
- hwctx->is_2d_only = 1;
-
- return 0;
-}
-
-int ffb_driver_getctx(struct inode *inode, struct file *filp, unsigned int cmd,
- unsigned long arg)
-{
- drm_file_t *priv = filp->private_data;
- drm_device_t *dev = priv->dev;
- ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private;
- struct ffb_hw_context *hwctx;
- drm_ctx_t ctx;
- int idx;
-
- if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx)))
- return -EFAULT;
-
- idx = ctx.handle;
- if (idx <= 0 || idx >= FFB_MAX_CTXS)
- return -EINVAL;
-
- hwctx = fpriv->hw_state[idx - 1];
- if (hwctx == NULL)
- return -EINVAL;
-
- if (hwctx->is_2d_only != 0)
- ctx.flags = _DRM_CONTEXT_2DONLY;
- else
- ctx.flags = 0;
-
- if (copy_to_user((drm_ctx_t __user *) arg, &ctx, sizeof(ctx)))
- return -EFAULT;
-
- return 0;
-}
-
-int ffb_driver_switchctx(struct inode *inode, struct file *filp,
- unsigned int cmd, unsigned long arg)
-{
- drm_file_t *priv = filp->private_data;
- drm_device_t *dev = priv->dev;
- drm_ctx_t ctx;
-
- if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx)))
- return -EFAULT;
- DRM_DEBUG("%d\n", ctx.handle);
- return ffb_driver_context_switch(dev, dev->last_context, ctx.handle);
-}
-
-int ffb_driver_newctx(struct inode *inode, struct file *filp, unsigned int cmd,
- unsigned long arg)
-{
- drm_ctx_t ctx;
-
- if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx)))
- return -EFAULT;
- DRM_DEBUG("%d\n", ctx.handle);
-
- return 0;
-}
-
-int ffb_driver_rmctx(struct inode *inode, struct file *filp, unsigned int cmd,
- unsigned long arg)
-{
- drm_ctx_t ctx;
- drm_file_t *priv = filp->private_data;
- drm_device_t *dev = priv->dev;
- ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private;
- int idx;
-
- if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx)))
- return -EFAULT;
- DRM_DEBUG("%d\n", ctx.handle);
-
- idx = ctx.handle - 1;
- if (idx < 0 || idx >= FFB_MAX_CTXS)
- return -EINVAL;
-
- kfree(fpriv->hw_state[idx]);
- fpriv->hw_state[idx] = NULL;
- return 0;
-}
-
-void ffb_set_context_ioctls(void)
-{
- DRM(ioctls)[DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)].func = ffb_driver_addctx;
- DRM(ioctls)[DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)].func = ffb_driver_rmctx;
- DRM(ioctls)[DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)].func = ffb_driver_modctx;
- DRM(ioctls)[DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)].func = ffb_driver_getctx;
- DRM(ioctls)[DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)].func =
- ffb_driver_switchctx;
- DRM(ioctls)[DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)].func = ffb_driver_newctx;
- DRM(ioctls)[DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)].func = ffb_driver_resctx;
-
-}
+++ /dev/null
-/* $Id: ffb_drv.c,v 1.16 2001/10/18 16:00:24 davem Exp $
- * ffb_drv.c: Creator/Creator3D direct rendering driver.
- *
- * Copyright (C) 2000 David S. Miller (davem@redhat.com)
- */
-
-#include "ffb.h"
-#include "drmP.h"
-
-#include "ffb_drv.h"
-
-#include <linux/smp_lock.h>
-#include <asm/shmparam.h>
-#include <asm/oplib.h>
-#include <asm/upa.h>
-
-#define DRIVER_AUTHOR "David S. Miller"
-
-#define DRIVER_NAME "ffb"
-#define DRIVER_DESC "Creator/Creator3D"
-#define DRIVER_DATE "20000517"
-
-#define DRIVER_MAJOR 0
-#define DRIVER_MINOR 0
-#define DRIVER_PATCHLEVEL 1
-
-typedef struct _ffb_position_t {
- int node;
- int root;
-} ffb_position_t;
-
-static ffb_position_t *ffb_position;
-
-static void get_ffb_type(ffb_dev_priv_t * ffb_priv, int instance)
-{
- volatile unsigned char *strap_bits;
- unsigned char val;
-
- strap_bits = (volatile unsigned char *)
- (ffb_priv->card_phys_base + 0x00200000UL);
-
- /* Don't ask, you have to read the value twice for whatever
- * reason to get correct contents.
- */
- val = upa_readb(strap_bits);
- val = upa_readb(strap_bits);
- switch (val & 0x78) {
- case (0x0 << 5) | (0x0 << 3):
- ffb_priv->ffb_type = ffb1_prototype;
- printk("ffb%d: Detected FFB1 pre-FCS prototype\n", instance);
- break;
- case (0x0 << 5) | (0x1 << 3):
- ffb_priv->ffb_type = ffb1_standard;
- printk("ffb%d: Detected FFB1\n", instance);
- break;
- case (0x0 << 5) | (0x3 << 3):
- ffb_priv->ffb_type = ffb1_speedsort;
- printk("ffb%d: Detected FFB1-SpeedSort\n", instance);
- break;
- case (0x1 << 5) | (0x0 << 3):
- ffb_priv->ffb_type = ffb2_prototype;
- printk("ffb%d: Detected FFB2/vertical pre-FCS prototype\n",
- instance);
- break;
- case (0x1 << 5) | (0x1 << 3):
- ffb_priv->ffb_type = ffb2_vertical;
- printk("ffb%d: Detected FFB2/vertical\n", instance);
- break;
- case (0x1 << 5) | (0x2 << 3):
- ffb_priv->ffb_type = ffb2_vertical_plus;
- printk("ffb%d: Detected FFB2+/vertical\n", instance);
- break;
- case (0x2 << 5) | (0x0 << 3):
- ffb_priv->ffb_type = ffb2_horizontal;
- printk("ffb%d: Detected FFB2/horizontal\n", instance);
- break;
- case (0x2 << 5) | (0x2 << 3):
- ffb_priv->ffb_type = ffb2_horizontal;
- printk("ffb%d: Detected FFB2+/horizontal\n", instance);
- break;
- default:
- ffb_priv->ffb_type = ffb2_vertical;
- printk("ffb%d: Unknown boardID[%08x], assuming FFB2\n",
- instance, val);
- break;
- };
-}
-
-static void ffb_apply_upa_parent_ranges(int parent,
- struct linux_prom64_registers *regs)
-{
- struct linux_prom64_ranges ranges[PROMREG_MAX];
- char name[128];
- int len, i;
-
- prom_getproperty(parent, "name", name, sizeof(name));
- if (strcmp(name, "upa") != 0)
- return;
-
- len =
- prom_getproperty(parent, "ranges", (void *)ranges, sizeof(ranges));
- if (len <= 0)
- return;
-
- len /= sizeof(struct linux_prom64_ranges);
- for (i = 0; i < len; i++) {
- struct linux_prom64_ranges *rng = &ranges[i];
- u64 phys_addr = regs->phys_addr;
-
- if (phys_addr >= rng->ot_child_base &&
- phys_addr < (rng->ot_child_base + rng->or_size)) {
- regs->phys_addr -= rng->ot_child_base;
- regs->phys_addr += rng->ot_parent_base;
- return;
- }
- }
-
- return;
-}
-
-static int ffb_init_one(drm_device_t * dev, int prom_node, int parent_node,
- int instance)
-{
- struct linux_prom64_registers regs[2 * PROMREG_MAX];
- ffb_dev_priv_t *ffb_priv = (ffb_dev_priv_t *) dev->dev_private;
- int i;
-
- ffb_priv->prom_node = prom_node;
- if (prom_getproperty(ffb_priv->prom_node, "reg",
- (void *)regs, sizeof(regs)) <= 0) {
- return -EINVAL;
- }
-	ffb_apply_upa_parent_ranges(parent_node, &regs[0]);
- ffb_priv->card_phys_base = regs[0].phys_addr;
- ffb_priv->regs = (ffb_fbcPtr)
- (regs[0].phys_addr + 0x00600000UL);
- get_ffb_type(ffb_priv, instance);
- for (i = 0; i < FFB_MAX_CTXS; i++)
- ffb_priv->hw_state[i] = NULL;
-
- return 0;
-}
-
-static drm_map_t *ffb_find_map(struct file *filp, unsigned long off)
-{
- drm_file_t *priv = filp->private_data;
- drm_device_t *dev;
- drm_map_list_t *r_list;
- struct list_head *list;
- drm_map_t *map;
-
- if (!priv || (dev = priv->dev) == NULL)
- return NULL;
-
- list_for_each(list, &dev->maplist->head) {
- r_list = (drm_map_list_t *) list;
- map = r_list->map;
- if (!map)
- continue;
- if (r_list->user_token == off)
- return map;
- }
-
- return NULL;
-}
-
-unsigned long ffb_get_unmapped_area(struct file *filp,
- unsigned long hint,
- unsigned long len,
- unsigned long pgoff, unsigned long flags)
-{
- drm_map_t *map = ffb_find_map(filp, pgoff << PAGE_SHIFT);
- unsigned long addr = -ENOMEM;
-
- if (!map)
- return get_unmapped_area(NULL, hint, len, pgoff, flags);
-
- if (map->type == _DRM_FRAME_BUFFER || map->type == _DRM_REGISTERS) {
-#ifdef HAVE_ARCH_FB_UNMAPPED_AREA
- addr = get_fb_unmapped_area(filp, hint, len, pgoff, flags);
-#else
- addr = get_unmapped_area(NULL, hint, len, pgoff, flags);
-#endif
- } else if (map->type == _DRM_SHM && SHMLBA > PAGE_SIZE) {
- unsigned long slack = SHMLBA - PAGE_SIZE;
-
- addr = get_unmapped_area(NULL, hint, len + slack, pgoff, flags);
- if (!(addr & ~PAGE_MASK)) {
- unsigned long kvirt = (unsigned long)map->handle;
-
- if ((kvirt & (SHMLBA - 1)) != (addr & (SHMLBA - 1))) {
- unsigned long koff, aoff;
-
- koff = kvirt & (SHMLBA - 1);
- aoff = addr & (SHMLBA - 1);
- if (koff < aoff)
- koff += SHMLBA;
-
- addr += (koff - aoff);
- }
- }
- } else {
- addr = get_unmapped_area(NULL, hint, len, pgoff, flags);
- }
-
- return addr;
-}
-
-static int ffb_presetup(drm_device_t * dev)
-{
- ffb_dev_priv_t *ffb_priv;
- int ret = 0;
- int i = 0;
-
- /* Check for the case where no device was found. */
- if (ffb_position == NULL)
- return -ENODEV;
-
- /* code used to use numdevs no numdevs anymore */
- ffb_priv = kmalloc(sizeof(ffb_dev_priv_t), GFP_KERNEL);
- if (!ffb_priv)
- return -ENOMEM;
- memset(ffb_priv, 0, sizeof(*ffb_priv));
- dev->dev_private = ffb_priv;
-
- ret = ffb_init_one(dev, ffb_position[i].node, ffb_position[i].root, i);
- return ret;
-}
-
-static void ffb_driver_release(drm_device_t * dev, struct file *filp)
-{
- ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private;
- int context = _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock);
- int idx;
-
- idx = context - 1;
- if (fpriv &&
- context != DRM_KERNEL_CONTEXT && fpriv->hw_state[idx] != NULL) {
- kfree(fpriv->hw_state[idx]);
- fpriv->hw_state[idx] = NULL;
- }
-}
-
-static void ffb_driver_pretakedown(drm_device_t * dev)
-{
- kfree(dev->dev_private);
-}
-
-static int ffb_driver_postcleanup(drm_device_t * dev)
-{
- kfree(ffb_position);
- return 0;
-}
-
-static void ffb_driver_kernel_context_switch_unlock(struct drm_device *dev,
- drm_lock_t * lock)
-{
- dev->lock.filp = 0;
- {
- __volatile__ unsigned int *plock = &dev->lock.hw_lock->lock;
- unsigned int old, new, prev, ctx;
-
- ctx = lock->context;
- do {
- old = *plock;
- new = ctx;
- prev = cmpxchg(plock, old, new);
- } while (prev != old);
- }
- wake_up_interruptible(&dev->lock.lock_queue);
-}
-
-static unsigned long ffb_driver_get_map_ofs(drm_map_t * map)
-{
- return (map->offset & 0xffffffff);
-}
-
-static unsigned long ffb_driver_get_reg_ofs(drm_device_t * dev)
-{
- ffb_dev_priv_t *ffb_priv = (ffb_dev_priv_t *) dev->dev_private;
-
- if (ffb_priv)
- return ffb_priv->card_phys_base;
-
- return 0;
-}
-
-static int postinit(struct drm_device *dev, unsigned long flags)
-{
- DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
- DRIVER_NAME,
- DRIVER_MAJOR,
- DRIVER_MINOR, DRIVER_PATCHLEVEL, DRIVER_DATE, dev->minor);
- return 0;
-}
-
-static int version(drm_version_t * version)
-{
- int len;
-
- version->version_major = DRIVER_MAJOR;
- version->version_minor = DRIVER_MINOR;
- version->version_patchlevel = DRIVER_PATCHLEVEL;
- DRM_COPY(version->name, DRIVER_NAME);
- DRM_COPY(version->date, DRIVER_DATE);
- DRM_COPY(version->desc, DRIVER_DESC);
- return 0;
-}
-
-static drm_ioctl_desc_t ioctls[] = {
-
-};
-
-static struct drm_driver driver = {
- .driver_features = 0,
- .dev_priv_size = sizeof(u32),
- .release = ffb_driver_release,
- .presetup = ffb_presetup,
- .pretakedown = ffb_driver_pretakedown,
- .postcleanup = ffb_driver_postcleanup,
- .kernel_context_switch = ffb_driver_context_switch,
- .kernel_context_switch_unlock = ffb_driver_kernel_context_switch_unlock,
- .get_map_ofs = ffb_driver_get_map_ofs,
- .get_reg_ofs = ffb_driver_get_reg_ofs,
- .postinit = postinit,
- .version = version,
- .ioctls = ioctls,
- .num_ioctls = DRM_ARRAY_SIZE(ioctls),
- .fops = {
- .owner = THIS_MODULE,
- .open = drm_open,
- .release = drm_release,
- .ioctl = drm_ioctl,
- .mmap = drm_mmap,
- .poll = drm_poll,
- .fasync = drm_fasync,
- }
- ,
-};
-
-static int __init ffb_init(void)
-{
- return -ENODEV;
-}
-
-static void __exit ffb_exit(void)
-{
-}
-
-module_init(ffb_init);
-module_exit(ffb_exit);
-
-MODULE_AUTHOR(DRIVER_AUTHOR);
-MODULE_DESCRIPTION(DRIVER_DESC);
-MODULE_LICENSE("GPL and additional rights");
+++ /dev/null
-/* $Id: ffb_drv.h,v 1.1 2000/06/01 04:24:39 davem Exp $
- * ffb_drv.h: Creator/Creator3D direct rendering driver.
- *
- * Copyright (C) 2000 David S. Miller (davem@redhat.com)
- */
-
-/* Auxilliary clips. */
-typedef struct {
- volatile unsigned int min;
- volatile unsigned int max;
-} ffb_auxclip, *ffb_auxclipPtr;
-
-/* FFB register set. */
-typedef struct _ffb_fbc {
- /* Next vertex registers, on the right we list which drawops
- * use said register and the logical name the register has in
- * that context.
- *//* DESCRIPTION DRAWOP(NAME) */
- /*0x00*/ unsigned int pad1[3];
- /* Reserved */
- /*0x0c*/ volatile unsigned int alpha;
- /* ALPHA Transparency */
- /*0x10*/ volatile unsigned int red;
- /* RED */
- /*0x14*/ volatile unsigned int green;
- /* GREEN */
- /*0x18*/ volatile unsigned int blue;
- /* BLUE */
- /*0x1c*/ volatile unsigned int z;
- /* DEPTH */
- /*0x20*/ volatile unsigned int y;
- /* Y triangle(DOYF) */
- /* aadot(DYF) */
- /* ddline(DYF) */
- /* aaline(DYF) */
- /*0x24*/ volatile unsigned int x;
- /* X triangle(DOXF) */
- /* aadot(DXF) */
- /* ddline(DXF) */
- /* aaline(DXF) */
- /*0x28*/ unsigned int pad2[2];
- /* Reserved */
- /*0x30*/ volatile unsigned int ryf;
- /* Y (alias to DOYF) ddline(RYF) */
- /* aaline(RYF) */
- /* triangle(RYF) */
- /*0x34*/ volatile unsigned int rxf;
- /* X ddline(RXF) */
- /* aaline(RXF) */
- /* triangle(RXF) */
- /*0x38*/ unsigned int pad3[2];
- /* Reserved */
- /*0x40*/ volatile unsigned int dmyf;
- /* Y (alias to DOYF) triangle(DMYF) */
- /*0x44*/ volatile unsigned int dmxf;
- /* X triangle(DMXF) */
- /*0x48*/ unsigned int pad4[2];
- /* Reserved */
- /*0x50*/ volatile unsigned int ebyi;
- /* Y (alias to RYI) polygon(EBYI) */
- /*0x54*/ volatile unsigned int ebxi;
- /* X polygon(EBXI) */
- /*0x58*/ unsigned int pad5[2];
- /* Reserved */
- /*0x60*/ volatile unsigned int by;
- /* Y brline(RYI) */
- /* fastfill(OP) */
- /* polygon(YI) */
- /* rectangle(YI) */
- /* bcopy(SRCY) */
- /* vscroll(SRCY) */
- /*0x64*/ volatile unsigned int bx;
- /* X brline(RXI) */
- /* polygon(XI) */
- /* rectangle(XI) */
- /* bcopy(SRCX) */
- /* vscroll(SRCX) */
- /* fastfill(GO) */
- /*0x68*/ volatile unsigned int dy;
- /* destination Y fastfill(DSTY) */
- /* bcopy(DSRY) */
- /* vscroll(DSRY) */
- /*0x6c*/ volatile unsigned int dx;
- /* destination X fastfill(DSTX) */
- /* bcopy(DSTX) */
- /* vscroll(DSTX) */
- /*0x70*/ volatile unsigned int bh;
- /* Y (alias to RYI) brline(DYI) */
- /* dot(DYI) */
- /* polygon(ETYI) */
- /* Height fastfill(H) */
- /* bcopy(H) */
- /* vscroll(H) */
- /* Y count fastfill(NY) */
- /*0x74*/ volatile unsigned int bw;
- /* X dot(DXI) */
- /* brline(DXI) */
- /* polygon(ETXI) */
- /* fastfill(W) */
- /* bcopy(W) */
- /* vscroll(W) */
- /* fastfill(NX) */
- /*0x78*/ unsigned int pad6[2];
- /* Reserved */
- /*0x80*/ unsigned int pad7[32];
- /* Reserved */
-
- /* Setup Unit's vertex state register */
-/*100*/ volatile unsigned int suvtx;
- /*104*/ unsigned int pad8[63];
- /* Reserved */
-
- /* Frame Buffer Control Registers */
- /*200*/ volatile unsigned int ppc;
- /* Pixel Processor Control */
- /*204*/ volatile unsigned int wid;
- /* Current WID */
- /*208*/ volatile unsigned int fg;
- /* FG data */
- /*20c*/ volatile unsigned int bg;
- /* BG data */
- /*210*/ volatile unsigned int consty;
- /* Constant Y */
- /*214*/ volatile unsigned int constz;
- /* Constant Z */
- /*218*/ volatile unsigned int xclip;
- /* X Clip */
- /*21c*/ volatile unsigned int dcss;
- /* Depth Cue Scale Slope */
- /*220*/ volatile unsigned int vclipmin;
- /* Viewclip XY Min Bounds */
- /*224*/ volatile unsigned int vclipmax;
- /* Viewclip XY Max Bounds */
- /*228*/ volatile unsigned int vclipzmin;
- /* Viewclip Z Min Bounds */
- /*22c*/ volatile unsigned int vclipzmax;
- /* Viewclip Z Max Bounds */
- /*230*/ volatile unsigned int dcsf;
- /* Depth Cue Scale Front Bound */
- /*234*/ volatile unsigned int dcsb;
- /* Depth Cue Scale Back Bound */
- /*238*/ volatile unsigned int dczf;
- /* Depth Cue Z Front */
- /*23c*/ volatile unsigned int dczb;
- /* Depth Cue Z Back */
- /*240*/ unsigned int pad9;
- /* Reserved */
- /*244*/ volatile unsigned int blendc;
- /* Alpha Blend Control */
- /*248*/ volatile unsigned int blendc1;
- /* Alpha Blend Color 1 */
- /*24c*/ volatile unsigned int blendc2;
- /* Alpha Blend Color 2 */
- /*250*/ volatile unsigned int fbramitc;
- /* FB RAM Interleave Test Control */
- /*254*/ volatile unsigned int fbc;
- /* Frame Buffer Control */
- /*258*/ volatile unsigned int rop;
- /* Raster OPeration */
- /*25c*/ volatile unsigned int cmp;
- /* Frame Buffer Compare */
- /*260*/ volatile unsigned int matchab;
- /* Buffer AB Match Mask */
- /*264*/ volatile unsigned int matchc;
- /* Buffer C(YZ) Match Mask */
- /*268*/ volatile unsigned int magnab;
- /* Buffer AB Magnitude Mask */
- /*26c*/ volatile unsigned int magnc;
- /* Buffer C(YZ) Magnitude Mask */
- /*270*/ volatile unsigned int fbcfg0;
- /* Frame Buffer Config 0 */
- /*274*/ volatile unsigned int fbcfg1;
- /* Frame Buffer Config 1 */
- /*278*/ volatile unsigned int fbcfg2;
- /* Frame Buffer Config 2 */
- /*27c*/ volatile unsigned int fbcfg3;
- /* Frame Buffer Config 3 */
- /*280*/ volatile unsigned int ppcfg;
- /* Pixel Processor Config */
- /*284*/ volatile unsigned int pick;
- /* Picking Control */
- /*288*/ volatile unsigned int fillmode;
- /* FillMode */
- /*28c*/ volatile unsigned int fbramwac;
- /* FB RAM Write Address Control */
- /*290*/ volatile unsigned int pmask;
- /* RGB PlaneMask */
- /*294*/ volatile unsigned int xpmask;
- /* X PlaneMask */
- /*298*/ volatile unsigned int ypmask;
- /* Y PlaneMask */
- /*29c*/ volatile unsigned int zpmask;
- /* Z PlaneMask */
- /*2a0*/ ffb_auxclip auxclip[4];
- /* Auxilliary Viewport Clip */
-
- /* New 3dRAM III support regs */
-/*2c0*/ volatile unsigned int rawblend2;
-/*2c4*/ volatile unsigned int rawpreblend;
-/*2c8*/ volatile unsigned int rawstencil;
-/*2cc*/ volatile unsigned int rawstencilctl;
-/*2d0*/ volatile unsigned int threedram1;
-/*2d4*/ volatile unsigned int threedram2;
-/*2d8*/ volatile unsigned int passin;
-/*2dc*/ volatile unsigned int rawclrdepth;
-/*2e0*/ volatile unsigned int rawpmask;
-/*2e4*/ volatile unsigned int rawcsrc;
-/*2e8*/ volatile unsigned int rawmatch;
-/*2ec*/ volatile unsigned int rawmagn;
-/*2f0*/ volatile unsigned int rawropblend;
-/*2f4*/ volatile unsigned int rawcmp;
-/*2f8*/ volatile unsigned int rawwac;
-/*2fc*/ volatile unsigned int fbramid;
-
- /*300*/ volatile unsigned int drawop;
- /* Draw OPeration */
- /*304*/ unsigned int pad10[2];
- /* Reserved */
- /*30c*/ volatile unsigned int lpat;
- /* Line Pattern control */
- /*310*/ unsigned int pad11;
- /* Reserved */
- /*314*/ volatile unsigned int fontxy;
- /* XY Font coordinate */
- /*318*/ volatile unsigned int fontw;
- /* Font Width */
- /*31c*/ volatile unsigned int fontinc;
- /* Font Increment */
- /*320*/ volatile unsigned int font;
- /* Font bits */
- /*324*/ unsigned int pad12[3];
- /* Reserved */
-/*330*/ volatile unsigned int blend2;
-/*334*/ volatile unsigned int preblend;
-/*338*/ volatile unsigned int stencil;
-/*33c*/ volatile unsigned int stencilctl;
-
- /*340*/ unsigned int pad13[4];
- /* Reserved */
- /*350*/ volatile unsigned int dcss1;
- /* Depth Cue Scale Slope 1 */
- /*354*/ volatile unsigned int dcss2;
- /* Depth Cue Scale Slope 2 */
- /*358*/ volatile unsigned int dcss3;
- /* Depth Cue Scale Slope 3 */
-/*35c*/ volatile unsigned int widpmask;
-/*360*/ volatile unsigned int dcs2;
-/*364*/ volatile unsigned int dcs3;
-/*368*/ volatile unsigned int dcs4;
- /*36c*/ unsigned int pad14;
- /* Reserved */
-/*370*/ volatile unsigned int dcd2;
-/*374*/ volatile unsigned int dcd3;
-/*378*/ volatile unsigned int dcd4;
- /*37c*/ unsigned int pad15;
- /* Reserved */
- /*380*/ volatile unsigned int pattern[32];
- /* area Pattern */
- /*400*/ unsigned int pad16[8];
- /* Reserved */
- /*420*/ volatile unsigned int reset;
- /* chip RESET */
- /*424*/ unsigned int pad17[247];
- /* Reserved */
- /*800*/ volatile unsigned int devid;
- /* Device ID */
- /*804*/ unsigned int pad18[63];
- /* Reserved */
- /*900*/ volatile unsigned int ucsr;
- /* User Control & Status Register */
- /*904*/ unsigned int pad19[31];
- /* Reserved */
- /*980*/ volatile unsigned int mer;
- /* Mode Enable Register */
- /*984*/ unsigned int pad20[1439];
- /* Reserved */
-} ffb_fbc, *ffb_fbcPtr;
-
-struct ffb_hw_context {
- int is_2d_only;
-
- unsigned int ppc;
- unsigned int wid;
- unsigned int fg;
- unsigned int bg;
- unsigned int consty;
- unsigned int constz;
- unsigned int xclip;
- unsigned int dcss;
- unsigned int vclipmin;
- unsigned int vclipmax;
- unsigned int vclipzmin;
- unsigned int vclipzmax;
- unsigned int dcsf;
- unsigned int dcsb;
- unsigned int dczf;
- unsigned int dczb;
- unsigned int blendc;
- unsigned int blendc1;
- unsigned int blendc2;
- unsigned int fbc;
- unsigned int rop;
- unsigned int cmp;
- unsigned int matchab;
- unsigned int matchc;
- unsigned int magnab;
- unsigned int magnc;
- unsigned int pmask;
- unsigned int xpmask;
- unsigned int ypmask;
- unsigned int zpmask;
- unsigned int auxclip0min;
- unsigned int auxclip0max;
- unsigned int auxclip1min;
- unsigned int auxclip1max;
- unsigned int auxclip2min;
- unsigned int auxclip2max;
- unsigned int auxclip3min;
- unsigned int auxclip3max;
- unsigned int drawop;
- unsigned int lpat;
- unsigned int fontxy;
- unsigned int fontw;
- unsigned int fontinc;
- unsigned int area_pattern[32];
- unsigned int ucsr;
- unsigned int stencil;
- unsigned int stencilctl;
- unsigned int dcss1;
- unsigned int dcss2;
- unsigned int dcss3;
- unsigned int dcs2;
- unsigned int dcs3;
- unsigned int dcs4;
- unsigned int dcd2;
- unsigned int dcd3;
- unsigned int dcd4;
- unsigned int mer;
-};
-
-#define FFB_MAX_CTXS 32
-
-enum ffb_chip_type {
- ffb1_prototype = 0, /* Early pre-FCS FFB */
- ffb1_standard, /* First FCS FFB, 100Mhz UPA, 66MHz gclk */
- ffb1_speedsort, /* Second FCS FFB, 100Mhz UPA, 75MHz gclk */
- ffb2_prototype, /* Early pre-FCS vertical FFB2 */
- ffb2_vertical, /* First FCS FFB2/vertical, 100Mhz UPA, 100MHZ gclk,
- 75(SingleBuffer)/83(DoubleBuffer) MHz fclk */
- ffb2_vertical_plus, /* Second FCS FFB2/vertical, same timings */
- ffb2_horizontal, /* First FCS FFB2/horizontal, same timings as FFB2/vert */
- ffb2_horizontal_plus, /* Second FCS FFB2/horizontal, same timings */
- afb_m3, /* FCS Elite3D, 3 float chips */
- afb_m6 /* FCS Elite3D, 6 float chips */
-};
-
-typedef struct ffb_dev_priv {
- /* Misc software state. */
- int prom_node;
- enum ffb_chip_type ffb_type;
- u64 card_phys_base;
- struct miscdevice miscdev;
-
- /* Controller registers. */
- ffb_fbcPtr regs;
-
- /* Context table. */
- struct ffb_hw_context *hw_state[FFB_MAX_CTXS];
-} ffb_dev_priv_t;
-
-extern unsigned long ffb_get_unmapped_area(struct file *filp,
- unsigned long hint,
- unsigned long len,
- unsigned long pgoff,
- unsigned long flags);
-extern void ffb_set_context_ioctls(void);
-extern drm_ioctl_desc_t DRM(ioctls)[];
-
-extern int ffb_driver_context_switch(drm_device_t * dev, int old, int new);
}
-static unsigned int gs_baudrates[] = {
- 0, 50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800,
- 9600, 19200, 38400, 57600, 115200, 230400, 460800, 921600
-};
-
-
void gs_set_termios (struct tty_struct * tty,
struct ktermios * old_termios)
{
baudrate = tty_get_baud_rate(tty);
- baudrate = gs_baudrates[baudrate];
if ((tiosp->c_cflag & CBAUD) == B38400) {
if ( (port->flags & ASYNC_SPD_MASK) == ASYNC_SPD_HI)
baudrate = 57600;
* March 2001: Ported from 2.0.34 by Liam Davies
*
*/
-
-#define RTC_IO_EXTENT 0x10 /*Only really two ports, but... */
-
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/miscdevice.h>
#include "lcd.h"
-static DEFINE_SPINLOCK(lcd_lock);
-
static int lcd_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg);
{
if (!valid_mmap_phys_addr_range(pgoff, len))
return (unsigned long) -EINVAL;
- return pgoff;
+ return pgoff << PAGE_SHIFT;
}
/* can't do an in-place private mapping if there's no MMU */
* (use |'ed TIOCM_RNG/DSR/CD/CTS for masking)
* Caller should use TIOCGICOUNT to see which one it was
*/
- case TIOCMIWAIT: {
- DECLARE_WAITQUEUE(wait, current);
- int ret;
+ case TIOCMIWAIT:
+ spin_lock_irqsave(&info->slock, flags);
+ cnow = info->icount; /* note the counters on entry */
+ spin_unlock_irqrestore(&info->slock, flags);
+
+ wait_event_interruptible(info->delta_msr_wait, ({
+ cprev = cnow;
spin_lock_irqsave(&info->slock, flags);
- cprev = info->icount; /* note the counters on entry */
+ cnow = info->icount; /* atomic copy */
spin_unlock_irqrestore(&info->slock, flags);
- add_wait_queue(&info->delta_msr_wait, &wait);
- while (1) {
- spin_lock_irqsave(&info->slock, flags);
- cnow = info->icount; /* atomic copy */
- spin_unlock_irqrestore(&info->slock, flags);
-
- set_current_state(TASK_INTERRUPTIBLE);
- if (((arg & TIOCM_RNG) &&
- (cnow.rng != cprev.rng)) ||
- ((arg & TIOCM_DSR) &&
- (cnow.dsr != cprev.dsr)) ||
- ((arg & TIOCM_CD) &&
- (cnow.dcd != cprev.dcd)) ||
- ((arg & TIOCM_CTS) &&
- (cnow.cts != cprev.cts))) {
- ret = 0;
- break;
- }
- /* see if a signal did it */
- if (signal_pending(current)) {
- ret = -ERESTARTSYS;
- break;
- }
- cprev = cnow;
- }
- current->state = TASK_RUNNING;
- remove_wait_queue(&info->delta_msr_wait, &wait);
- break;
- }
- /* NOTREACHED */
+ ((arg & TIOCM_RNG) && (cnow.rng != cprev.rng)) ||
+ ((arg & TIOCM_DSR) && (cnow.dsr != cprev.dsr)) ||
+ ((arg & TIOCM_CD) && (cnow.dcd != cprev.dcd)) ||
+ ((arg & TIOCM_CTS) && (cnow.cts != cprev.cts));
+ }));
+ break;
/*
* Get counter of input serial line interrupts (DCD,RI,DSR,CTS)
* Return: write counters to the user passed counter struct
* (use |'ed TIOCM_RNG/DSR/CD/CTS for masking)
* Caller should use TIOCGICOUNT to see which one it was
*/
- case TIOCMIWAIT: {
- DECLARE_WAITQUEUE(wait, current);
- int ret;
+ case TIOCMIWAIT:
spin_lock_irqsave(&info->slock, flags);
- cprev = info->icount; /* note the counters on entry */
+ cnow = info->icount; /* note the counters on entry */
spin_unlock_irqrestore(&info->slock, flags);
- add_wait_queue(&info->delta_msr_wait, &wait);
- while (1) {
+ wait_event_interruptible(info->delta_msr_wait, ({
+ cprev = cnow;
spin_lock_irqsave(&info->slock, flags);
cnow = info->icount; /* atomic copy */
spin_unlock_irqrestore(&info->slock, flags);
- set_current_state(TASK_INTERRUPTIBLE);
- if (((arg & TIOCM_RNG) &&
- (cnow.rng != cprev.rng)) ||
- ((arg & TIOCM_DSR) &&
- (cnow.dsr != cprev.dsr)) ||
- ((arg & TIOCM_CD) &&
- (cnow.dcd != cprev.dcd)) ||
- ((arg & TIOCM_CTS) &&
- (cnow.cts != cprev.cts))) {
- ret = 0;
- break;
- }
- /* see if a signal did it */
- if (signal_pending(current)) {
- ret = -ERESTARTSYS;
- break;
- }
- cprev = cnow;
- }
- current->state = TASK_RUNNING;
- remove_wait_queue(&info->delta_msr_wait, &wait);
+ ((arg & TIOCM_RNG) && (cnow.rng != cprev.rng)) ||
+ ((arg & TIOCM_DSR) && (cnow.dsr != cprev.dsr)) ||
+ ((arg & TIOCM_CD) && (cnow.dcd != cprev.dcd)) ||
+ ((arg & TIOCM_CTS) && (cnow.cts != cprev.cts));
+ }));
break;
- }
- /* NOTREACHED */
/*
* Get counter of input serial line interrupts (DCD,RI,DSR,CTS)
* Return: write counters to the user passed counter struct
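
The two TIOCMIWAIT hunks above replace hand-rolled sleep loops with
wait_event_interruptible(), whose condition re-samples the modem-status
counters under the lock on every wakeup. Below is a minimal, self-contained
sketch of that pattern; struct my_port, its field names, and
my_wait_modem_change() are illustrative stand-ins, not code from mxser or
any other driver.

#include <linux/wait.h>
#include <linux/spinlock.h>
#include <linux/serial.h>	/* struct async_icount */
#include <linux/termios.h>	/* TIOCM_* flags */

struct my_port {			/* hypothetical per-port state */
	spinlock_t slock;
	wait_queue_head_t delta_msr_wait;
	struct async_icount icount;
};

static int my_wait_modem_change(struct my_port *port, unsigned long arg)
{
	struct async_icount cnow, cprev;
	unsigned long flags;

	spin_lock_irqsave(&port->slock, flags);
	cnow = port->icount;		/* counters on entry */
	spin_unlock_irqrestore(&port->slock, flags);

	/* 0 when a watched line changed, -ERESTARTSYS if interrupted */
	return wait_event_interruptible(port->delta_msr_wait, ({
		cprev = cnow;
		spin_lock_irqsave(&port->slock, flags);
		cnow = port->icount;	/* atomic copy */
		spin_unlock_irqrestore(&port->slock, flags);
		((arg & TIOCM_RNG) && (cnow.rng != cprev.rng)) ||
		((arg & TIOCM_DSR) && (cnow.dsr != cprev.dsr)) ||
		((arg & TIOCM_CD)  && (cnow.dcd != cprev.dcd)) ||
		((arg & TIOCM_CTS) && (cnow.cts != cprev.cts));
	}));
}
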
port->mon_data.rxcnt += cnt;
port->mon_data.up_rxcnt += cnt;
+ /*
+ * We are called from an interrupt context with &port->slock
+ * being held. Drop it temporarily in order to prevent
+ * recursive locking.
+ */
+ spin_unlock(&port->slock);
tty_flip_buffer_push(tty);
+ spin_lock(&port->slock);
}
static void mxser_transmit_chars(struct mxser_port *port)
read_unlock(&tasklist_lock);
tty->flags = 0;
+ put_pid(tty->session);
+ put_pid(tty->pgrp);
tty->session = NULL;
tty->pgrp = NULL;
tty->ctrl_status = 0;
{
struct pid *old_pgrp;
if (tty) {
+	/* We should not have a session or pgrp to put here but.... */
+ put_pid(tty->session);
+ put_pid(tty->pgrp);
tty->session = get_pid(task_session(tsk));
tty->pgrp = get_pid(task_pgrp(tsk));
}
return -ENOMEM;
memset(vc, 0, sizeof(*vc));
vc_cons[currcons].d = vc;
+ INIT_WORK(&vc_cons[currcons].SAK_work, vc_SAK);
visual_init(vc, currcons, 1);
if (!*vc->vc_uni_pagedir_loc)
con_set_default_unimap(vc);
release_console_sem();
}
-void set_console(int nr)
+int set_console(int nr)
{
+ struct vc_data *vc = vc_cons[fg_console].d;
+
+ if (!vc_cons_allocated(nr) || vt_dont_switch ||
+ (vc->vt_mode.mode == VT_AUTO && vc->vc_mode == KD_GRAPHICS)) {
+
+ /*
+ * Console switch will fail in console_callback() or
+ * change_console() so there is no point scheduling
+ * the callback
+ *
+ * Existing set_console() users don't check the return
+ * value so this shouldn't break anything
+ */
+ return -EINVAL;
+ }
+
want_console = nr;
schedule_console_callback();
+
+ return 0;
}
struct tty_driver *console_driver;
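
set_console() now returns an error when the switch would be refused
(vt_dont_switch, or VT_AUTO with the console in KD_GRAPHICS). The fragment
below is purely illustrative (no such caller exists in this patch), showing
how a caller could act on the new return value instead of scheduling a
switch that console_callback() would reject anyway.

#include <linux/kernel.h>	/* printk */
#include <linux/vt_kern.h>	/* set_console() */

/* Illustrative caller only; try_vt_switch() is not part of this patch. */
static void try_vt_switch(int nr)
{
	if (set_console(nr) < 0)
		printk(KERN_DEBUG "vt: switch to console %d refused\n", nr);
}
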
#include <linux/kbd_diacr.h>
#include <linux/selection.h>
-static char vt_dont_switch;
+char vt_dont_switch;
extern struct tty_driver *console_driver;
#define VT_IS_IN_USE(i) (console_driver->ttys[i] && console_driver->ttys[i]->count)
add_wait_queue(&vt_activate_queue, &wait);
for (;;) {
- set_current_state(TASK_INTERRUPTIBLE);
retval = 0;
- if (vt == fg_console)
+
+ /*
+ * Synchronize with redraw_screen(). By acquiring the console
+ * semaphore we make sure that the console switch is completed
+ * before we return. If we didn't wait for the semaphore, we
+ * could return at a point where fg_console has already been
+ * updated, but the console switch hasn't been completed.
+ */
+ acquire_console_sem();
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (vt == fg_console) {
+ release_console_sem();
break;
+ }
+ release_console_sem();
retval = -EINTR;
if (signal_pending(current))
break;
config I8XX_TCO
tristate "Intel i8xx TCO Timer/Watchdog"
depends on WATCHDOG && (X86 || IA64) && PCI
+ default n
---help---
Hardware driver for the TCO timer built into the Intel 82801
I/O Controller Hub family. The TCO (Total Cost of Ownership)
{
void __user *argp = (void __user *)arg;
int __user *p = argp;
- switch(cmd){
- case WDIOC_GETSUPPORT:
- if (copy_to_user(argp, &zf_info, sizeof(zf_info)))
- return -EFAULT;
- break;
+ switch (cmd) {
+ case WDIOC_GETSUPPORT:
+ if (copy_to_user(argp, &zf_info, sizeof(zf_info)))
+ return -EFAULT;
+ break;
- case WDIOC_GETSTATUS:
- return put_user(0, p);
+ case WDIOC_GETSTATUS:
+ return put_user(0, p);
- case WDIOC_KEEPALIVE:
- zf_ping(0);
- break;
+ case WDIOC_KEEPALIVE:
+ zf_ping(0);
+ break;
- default:
- return -ENOTTY;
+ default:
+ return -ENOTTY;
}
return 0;
static inline void acpi_pm_need_workaround(void)
{
clocksource_acpi_pm.read = acpi_pm_read_slow;
- clocksource_acpi_pm.rating = 110;
+ clocksource_acpi_pm.rating = 120;
}
/*
{
unsigned int cpu = sys_dev->id;
int retval;
+
+ if (cpu_is_offline(cpu))
+ return 0;
+
if (unlikely(lock_policy_rwsem_write(cpu)))
BUG();
chan->client = NULL;
kref_put(&chan->device->refcount, dma_async_device_cleanup);
}
+EXPORT_SYMBOL(dma_chan_cleanup);
static void dma_chan_free_rcu(struct rcu_head *rcu)
{
return client;
}
+EXPORT_SYMBOL(dma_async_client_register);
/**
* dma_async_client_unregister - unregister a client and free the &dma_client
kfree(client);
dma_chans_rebalance();
}
+EXPORT_SYMBOL(dma_async_client_unregister);
/**
* dma_async_client_chan_request - request DMA channels
client->chans_desired = number;
dma_chans_rebalance();
}
+EXPORT_SYMBOL(dma_async_client_chan_request);
/**
* dma_async_device_register - registers DMA devices found
return 0;
}
+EXPORT_SYMBOL(dma_async_device_register);
/**
* dma_async_device_cleanup - function called when all references are released
kref_put(&device->refcount, dma_async_device_cleanup);
wait_for_completion(&device->done);
}
+EXPORT_SYMBOL(dma_async_device_unregister);
static int __init dma_bus_init(void)
{
mutex_init(&dma_list_mutex);
return class_register(&dma_devclass);
}
-
subsys_initcall(dma_bus_init);
-EXPORT_SYMBOL(dma_async_client_register);
-EXPORT_SYMBOL(dma_async_client_unregister);
-EXPORT_SYMBOL(dma_async_client_chan_request);
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
-EXPORT_SYMBOL(dma_async_memcpy_pg_to_pg);
-EXPORT_SYMBOL(dma_async_memcpy_complete);
-EXPORT_SYMBOL(dma_async_memcpy_issue_pending);
-EXPORT_SYMBOL(dma_async_device_register);
-EXPORT_SYMBOL(dma_async_device_unregister);
-EXPORT_SYMBOL(dma_chan_cleanup);
/* There is only *one* pci_eisa device per machine, right ? */
static struct eisa_root_device pci_eisa_root;
-static int __devinit pci_eisa_init (struct pci_dev *pdev,
- const struct pci_device_id *ent)
+static int __init pci_eisa_init(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
{
int rc;
#include <asm/byteorder.h>
#include <linux/input.h>
#include <linux/wait.h>
+#include <linux/vmalloc.h>
#include <linux/hid.h>
#include <linux/hiddev.h>
memcpy(device->rdesc, start, size);
device->rsize = size;
- if (!(parser = kzalloc(sizeof(struct hid_parser), GFP_KERNEL))) {
+ if (!(parser = vmalloc(sizeof(struct hid_parser)))) {
kfree(device->rdesc);
kfree(device->collection);
kfree(device);
return NULL;
}
+ memset(parser, 0, sizeof(struct hid_parser));
parser->device = device;
end = start + size;
if (item.format != HID_ITEM_FORMAT_SHORT) {
dbg("unexpected long global item");
hid_free_device(device);
- kfree(parser);
+ vfree(parser);
return NULL;
}
dbg("item %u %u %u %u parsing failed\n",
item.format, (unsigned)item.size, (unsigned)item.type, (unsigned)item.tag);
hid_free_device(device);
- kfree(parser);
+ vfree(parser);
return NULL;
}
if (parser->collection_stack_ptr) {
dbg("unbalanced collection at end of report description");
hid_free_device(device);
- kfree(parser);
+ vfree(parser);
return NULL;
}
if (parser->local.delimiter_depth) {
dbg("unbalanced delimiter at end of report description");
hid_free_device(device);
- kfree(parser);
+ vfree(parser);
return NULL;
}
- kfree(parser);
+ vfree(parser);
return device;
}
}
dbg("item fetching failed at offset %d\n", (int)(end - start));
hid_free_device(device);
- kfree(parser);
+ vfree(parser);
return NULL;
}
EXPORT_SYMBOL_GPL(hid_parse_report);
report += offset >> 3; /* adjust byte index */
offset &= 7; /* now only need bit offset into one byte */
- x = get_unaligned((u64 *) report);
- x = le64_to_cpu(x);
+ x = le64_to_cpu(get_unaligned((__le64 *) report));
x = (x >> offset) & ((1ULL << n) - 1); /* extract bit field */
return (u32) x;
}
*/
static __inline__ void implement(__u8 *report, unsigned offset, unsigned n, __u32 value)
{
- u64 x;
+ __le64 x;
u64 m = (1ULL << n) - 1;
WARN_ON(n > 32);
report += offset >> 3;
offset &= 7;
- x = get_unaligned((u64 *)report);
+ x = get_unaligned((__le64 *)report);
x &= cpu_to_le64(~(m << offset));
x |= cpu_to_le64(((u64) value) << offset);
- put_unaligned(x, (u64 *) report);
+ put_unaligned(x, (__le64 *) report);
}
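
For reference, extract() above reads an 8-byte little-endian window starting
at the byte that contains the field, then shifts and masks out the n-bit
value. The standalone userspace restatement below makes that arithmetic
explicit; extract_le_field() is a hypothetical name and, like the kernel
helper, it assumes the field (n <= 32) lies within those 8 bytes.

#include <stdint.h>

/* Userspace restatement of the extract() arithmetic; illustrative only. */
static uint32_t extract_le_field(const uint8_t *report,
				 unsigned offset, unsigned n)
{
	uint64_t x = 0;
	unsigned i;

	report += offset >> 3;		/* byte holding the field's first bit */
	offset &= 7;			/* remaining bit offset in that byte */

	for (i = 0; i < 8; i++)		/* unaligned little-endian 64-bit load */
		x |= (uint64_t)report[i] << (8 * i);

	return (uint32_t)((x >> offset) & ((1ULL << n) - 1));
}
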
/*
unsigned size = field->report_size;
unsigned n;
- /* make sure the unused bits in the last byte are zeros */
- if (count > 0 && size > 0)
- data[(offset+count*size-1)/8] = 0;
-
for (n = 0; n < count; n++) {
if (field->logical_minimum < 0) /* signed values */
implement(data, offset + n * size, size, s32ton(field->value[n], size));
if (size < rsize) {
dbg("report %d is too short, (%d < %d)", report->id, size, rsize);
- return -1;
+ memset(data + size, 0, rsize - size);
}
if ((hid->claimed & HID_CLAIMED_HIDDEV) && hid->hiddev_report_event)
config SENSORS_W83793
tristate "Winbond W83793"
depends on HWMON && I2C && EXPERIMENTAL
+ select HWMON_VID
help
If you say yes here you get support for the Winbond W83793
hardware monitoring chip.
* ISA constants
*/
-#define REGION_ALIGNMENT ~7
-#define REGION_OFFSET 5
-#define REGION_LENGTH 2
+#define IOREGION_ALIGNMENT ~7
+#define IOREGION_OFFSET 5
+#define IOREGION_LENGTH 2
#define ADDR_REG_OFFSET 5
#define DATA_REG_OFFSET 6
break;
case 4:
reg = (w83627ehf_read_value(client, W83627EHF_REG_DIODE) & 0x73)
- | ((data->fan_div[4] & 0x03) << 3)
+ | ((data->fan_div[4] & 0x03) << 2)
| ((data->fan_div[4] & 0x04) << 5);
w83627ehf_write_value(client, W83627EHF_REG_DIODE, reg);
break;
time */
if (data->fan[i] == 0xff
&& data->fan_div[i] < 0x07) {
- dev_dbg(&client->dev, "Increasing fan %d "
+ dev_dbg(&client->dev, "Increasing fan%d "
"clock divider from %u to %u\n",
- i, div_from_reg(data->fan_div[i]),
+ i + 1, div_from_reg(data->fan_div[i]),
div_from_reg(data->fan_div[i] + 1));
data->fan_div[i]++;
w83627ehf_write_fan_div(client, i);
u8 fan4pin, fan5pin;
int i, err = 0;
- if (!request_region(address + REGION_OFFSET, REGION_LENGTH,
+ if (!request_region(address + IOREGION_OFFSET, IOREGION_LENGTH,
w83627ehf_driver.driver.name)) {
err = -EBUSY;
goto exit;
exit_free:
kfree(data);
exit_release:
- release_region(address + REGION_OFFSET, REGION_LENGTH);
+ release_region(address + IOREGION_OFFSET, IOREGION_LENGTH);
exit:
return err;
}
if ((err = i2c_detach_client(client)))
return err;
- release_region(client->addr + REGION_OFFSET, REGION_LENGTH);
+ release_region(client->addr + IOREGION_OFFSET, IOREGION_LENGTH);
kfree(data);
return 0;
superio_select(W83627EHF_LD_HWM);
val = (superio_inb(SIO_REG_ADDR) << 8)
| superio_inb(SIO_REG_ADDR + 1);
- *addr = val & REGION_ALIGNMENT;
+ *addr = val & IOREGION_ALIGNMENT;
if (*addr == 0) {
superio_exit();
return -ENODEV;
config I2C_PASEMI
tristate "PA Semi SMBus interface"
-# depends on PPC_PASEMI && I2C && PCI
- depends on I2C && PCI
+ depends on PPC_PASEMI && I2C && PCI
help
Supports the PA Semi PWRficient on-chip SMBus interfaces.
break;
case I2C_SMBUS_BLOCK_PROC_CALL:
- len = min_t(u8, data->block[0], 31);
+ len = min_t(u8, data->block[0],
+ I2C_SMBUS_BLOCK_MAX - 1);
amd_ec_write(smbus, AMD_SMB_CMD, command);
amd_ec_write(smbus, AMD_SMB_BCNT, len);
for (i = 0; i < len; i++)
int command, int hwpec);
static unsigned long i801_smba;
+static unsigned char i801_original_hstcfg;
static struct pci_driver i801_driver;
static struct pci_dev *I801_dev;
static int isich4;
}
pci_read_config_byte(I801_dev, SMBHSTCFG, &temp);
+ i801_original_hstcfg = temp;
temp &= ~SMBHSTCFG_I2C_EN; /* SMBus timing */
if (!(temp & SMBHSTCFG_HST_EN)) {
dev_info(&dev->dev, "Enabling SMBus device\n");
static void __devexit i801_remove(struct pci_dev *dev)
{
i2c_del_adapter(&i801_adapter);
+ pci_write_config_byte(I801_dev, SMBHSTCFG, i801_original_hstcfg);
pci_release_region(dev, SMBBAR);
/*
* do not call pci_disable_device(dev) since it can cause hard hangs on
*/
}
+#ifdef CONFIG_PM
+static int i801_suspend(struct pci_dev *dev, pm_message_t mesg)
+{
+ pci_save_state(dev);
+ pci_write_config_byte(dev, SMBHSTCFG, i801_original_hstcfg);
+ pci_set_power_state(dev, pci_choose_state(dev, mesg));
+ return 0;
+}
+
+static int i801_resume(struct pci_dev *dev)
+{
+ pci_set_power_state(dev, PCI_D0);
+ pci_restore_state(dev);
+ return pci_enable_device(dev);
+}
+#else
+#define i801_suspend NULL
+#define i801_resume NULL
+#endif
+
static struct pci_driver i801_driver = {
.name = "i801_smbus",
.id_table = i801_ids,
.probe = i801_probe,
.remove = __devexit_p(i801_remove),
+ .suspend = i801_suspend,
+ .resume = i801_resume,
};
static int __init i2c_i801_init(void)
for (i = 0; i < msg->len - 1; i++)
TXFIFO_WR(smbus, msg->buf[i]);
- TXFIFO_WR(smbus, msg->buf[msg->len] |
+ TXFIFO_WR(smbus, msg->buf[msg->len-1] |
(stop ? MTXFIFO_STOP : 0));
}
rd = RXFIFO_RD(smbus);
len = min_t(u8, (rd & MRXFIFO_DATA_M),
I2C_SMBUS_BLOCK_MAX);
- TXFIFO_WR(smbus, (len + 1) | MTXFIFO_READ |
+ TXFIFO_WR(smbus, len | MTXFIFO_READ |
MTXFIFO_STOP);
} else {
len = min_t(u8, data->block[0], I2C_SMBUS_BLOCK_MAX);
rd = RXFIFO_RD(smbus);
len = min_t(u8, (rd & MRXFIFO_DATA_M),
I2C_SMBUS_BLOCK_MAX - len);
- TXFIFO_WR(smbus, (len + 1) | MTXFIFO_READ | MTXFIFO_STOP);
+ TXFIFO_WR(smbus, len | MTXFIFO_READ | MTXFIFO_STOP);
break;
default:
client->driver = &ds1374_driver;
ds1374_workqueue = create_singlethread_workqueue("ds1374");
+ if (!ds1374_workqueue) {
+ kfree(client);
+ return -ENOMEM; /* most expected reason */
+ }
if ((rc = i2c_attach_client(client)) != 0) {
kfree(client);
config IDE_MAX_HWIFS
int "Max IDE interfaces"
depends on ALPHA || SUPERH || IA64 || EMBEDDED
+ range 1 10
default 4
help
This is the maximum number of IDE hardware interfaces that will
---help---
There are two drivers for Serial ATA controllers.
- The main driver, "libata", exists inside the SCSI subsystem
- and supports most modern SATA controllers.
+	  The main driver, "libata", uses the SCSI subsystem
+	  and supports most modern SATA controllers. To use it, look
+	  under "Serial ATA (prod) and Parallel ATA (experimental)
+	  drivers".
The IDE driver (which you are currently configuring) supports
a few first-generation SATA controllers.
Generally say N here.
-config IDEDMA_PCI_AUTO
- bool "Use PCI DMA by default when available"
- ---help---
- Prior to kernel version 2.1.112, Linux used to automatically use
- DMA for IDE drives and chipsets which support it. Due to concerns
- about a couple of cases where buggy hardware may have caused damage,
- the default is now to NOT use DMA automatically. To revert to the
- previous behaviour, say Y to this question.
-
- If you suspect your hardware is at all flakey, say N here.
- Do NOT email the IDE kernel people regarding this issue!
-
- It is normally safe to answer Y to this question unless your
- motherboard uses a VIA VP2 chipset, in which case you should say N.
-
config IDEDMA_ONLYDISK
bool "Enable DMA only for disks "
- depends on IDEDMA_PCI_AUTO
help
This is used if you know your ATAPI Devices are going to fail DMA
Transfers.
help
This driver adds support for Toshiba TC86C001 GOKU-S chip.
+config BLK_DEV_CELLEB
+ tristate "Toshiba's Cell Reference Set IDE support"
+ depends on PPC_CELLEB
+ help
+	  This driver provides support for the built-in IDE controller on
+	  the Toshiba Cell Reference Board.
+ If unsure, say Y.
+
endif
config BLK_DEV_IDE_PMAC
to transfer data to and from memory. Saying Y is safe and improves
performance.
-config BLK_DEV_IDE_CELLEB
- bool "Toshiba's Cell Reference Set IDE support"
- depends on PPC_CELLEB
- help
- This driver provides support for the built-in IDE controller on
- Toshiba Cell Reference Board.
- If unsure, say Y.
-
config BLK_DEV_IDE_SWARM
tristate "IDE for Sibyte evaluation boards"
depends on SIBYTE_SB1xxx_SOC
Say Y here if you want to add DMA (Direct Memory Access) support to
the ICS IDE driver.
-config IDEDMA_ICS_AUTO
- bool "Use ICS DMA by default"
- depends on BLK_DEV_IDEDMA_ICS
- help
- Prior to kernel version 2.1.112, Linux used to automatically use
- DMA for IDE drives and chipsets which support it. Due to concerns
- about a couple of cases where buggy hardware may have caused damage,
- the default is now to NOT use DMA automatically. To revert to the
- previous behaviour, say Y to this question.
-
- If you suspect your hardware is at all flakey, say N here.
- Do NOT email the IDE kernel people regarding this issue!
-
config BLK_DEV_IDE_RAPIDE
tristate "RapIDE interface support"
depends on ARM && ARCH_ACORN
It is normally safe to answer Y; however, the default is N.
-config IDEDMA_AUTO
- def_bool IDEDMA_PCI_AUTO || IDEDMA_ICS_AUTO
-
endif
config BLK_DEV_HD_ONLY
# built-in only drivers from ppc/
ide-core-$(CONFIG_BLK_DEV_MPC8xx_IDE) += ppc/mpc8xx.o
ide-core-$(CONFIG_BLK_DEV_IDE_PMAC) += ppc/pmac.o
-ide-core-$(CONFIG_BLK_DEV_IDE_CELLEB) += ppc/scc_pata.o
# built-in only drivers from h8300/
ide-core-$(CONFIG_H8300) += h8300/ide-h8300.o
}
#ifdef CONFIG_BLK_DEV_IDEDMA_ICS
-
-#ifndef CONFIG_IDEDMA_ICS_AUTO
-#warning CONFIG_IDEDMA_ICS_AUTO=n support is obsolete, and will be removed soon.
-#endif
-
/*
* SG-DMA support.
*
static void icside_dma_init(ide_hwif_t *hwif)
{
- int autodma = 0;
-
-#ifdef CONFIG_IDEDMA_ICS_AUTO
- autodma = 1;
-#endif
-
printk(" %s: SG-DMA", hwif->name);
hwif->atapi_dma = 1;
hwif->dmatable_cpu = NULL;
hwif->dmatable_dma = 0;
hwif->speedproc = icside_set_speed;
- hwif->autodma = autodma;
+ hwif->autodma = 1;
hwif->ide_dma_check = icside_dma_check;
hwif->dma_host_off = icside_dma_host_off;
cdrom_saw_media_change (drive);
/*printk("%s: media changed\n",drive->name);*/
return 0;
+ } else if ((sense_key == ILLEGAL_REQUEST) &&
+ (rq->cmd[0] == GPCMD_START_STOP_UNIT)) {
+ /*
+ * Don't print error message for this condition--
+ * SFF8090i indicates that 5/24/00 is the correct
+ * response to a request to close the tray if the
+ * drive doesn't have that capability.
+ * cdrom_log_sense() knows this!
+ */
} else if (!(rq->cmd_flags & REQ_QUIET)) {
/* Otherwise, print an error. */
ide_dump_status(drive, "packet command error", stat);
switch(rc) {
case -1: /* DMA needs to be disabled */
hwif->dma_off_quietly(drive);
- return 0;
+ return -1;
case 0: /* DMA needs to be enabled */
return hwif->ide_dma_on(drive);
case 1: /* DMA setting cannot be changed */
if ((stat & DRQ_STAT) && rq_data_dir(rq) == READ && hwif->err_stops_fifo == 0)
try_to_flush_leftover_data(drive);
+ if (rq->errors >= ERROR_MAX || blk_noretry_request(rq)) {
+ ide_kill_rq(drive, rq);
+ return ide_stopped;
+ }
+
if (hwif->INB(IDE_STATUS_REG) & (BUSY_STAT|DRQ_STAT))
- /* force an abort */
- hwif->OUTB(WIN_IDLEIMMEDIATE, IDE_COMMAND_REG);
+ rq->errors |= ERROR_RESET;
- if (rq->errors >= ERROR_MAX || blk_noretry_request(rq))
- ide_kill_rq(drive, rq);
- else {
- if ((rq->errors & ERROR_RESET) == ERROR_RESET) {
- ++rq->errors;
- return ide_do_reset(drive);
- }
- if ((rq->errors & ERROR_RECAL) == ERROR_RECAL)
- drive->special.b.recalibrate = 1;
+ if ((rq->errors & ERROR_RESET) == ERROR_RESET) {
++rq->errors;
+ return ide_do_reset(drive);
}
+
+ if ((rq->errors & ERROR_RECAL) == ERROR_RECAL)
+ drive->special.b.recalibrate = 1;
+
+ ++rq->errors;
+
return ide_stopped;
}
if (!drive->special.all) {
ide_driver_t *drv;
+ /*
+ * We reset the drive so we need to issue a SETFEATURES.
+ * Do it _after_ do_special() restored device parameters.
+ */
+ if (drive->current_speed == 0xff)
+ ide_config_drive_speed(drive, drive->desired_speed);
+
if (rq->cmd_type == REQ_TYPE_ATA_CMD ||
rq->cmd_type == REQ_TYPE_ATA_TASK ||
rq->cmd_type == REQ_TYPE_ATA_TASKFILE)
#endif
/* so that ide_timer_expiry knows what to do */
hwgroup->sleeping = 1;
+ hwgroup->req_gen_timer = hwgroup->req_gen;
mod_timer(&hwgroup->timer, sleep);
/* we purposely leave hwgroup->busy==1
* while sleeping */
spin_lock_irqsave(&ide_lock, flags);
- if ((handler = hwgroup->handler) == NULL) {
+ if (((handler = hwgroup->handler) == NULL) ||
+ (hwgroup->req_gen != hwgroup->req_gen_timer)) {
/*
* Either a marginal timeout occurred
* (got the interrupt just as timer expired),
if ((wait = expiry(drive)) > 0) {
/* reset timer */
hwgroup->timer.expires = jiffies + wait;
+ hwgroup->req_gen_timer = hwgroup->req_gen;
add_timer(&hwgroup->timer);
spin_unlock_irqrestore(&ide_lock, flags);
return;
printk(KERN_ERR "%s: ide_intr: hwgroup->busy was 0 ??\n", drive->name);
}
hwgroup->handler = NULL;
+ hwgroup->req_gen++;
del_timer(&hwgroup->timer);
spin_unlock(&ide_lock);
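
The req_gen/req_gen_timer pair added in these hunks is a generation counter: every serviced interrupt bumps req_gen, and every armed timer snapshots it into req_gen_timer, so a later expiry can tell whether the command it was guarding has already completed. A minimal sketch of the idea, with illustrative names rather than the driver's own:

	struct guard {
		unsigned int gen;	/* bumped each time the awaited event arrives */
		unsigned int gen_timer;	/* snapshot taken when the timeout was armed */
	};

	static void arm_timeout(struct guard *g)
	{
		g->gen_timer = g->gen;	/* this timer guards the current command only */
		/* ... add_timer()/mod_timer() as usual ... */
	}

	static void event_arrived(struct guard *g)
	{
		g->gen++;		/* any still-pending timer is now stale */
	}

	static void timeout_expired(struct guard *g)
	{
		if (g->gen != g->gen_timer)
			return;		/* marginal timeout: the command already finished */
		/* ... genuine timeout handling ... */
	}
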
if(!(drive->id->hw_config & 0x4000))
return 0;
#endif /* CONFIG_IDEDMA_IVB */
- if (!(drive->id->hw_config & 0x2000))
- return 0;
+ /*
+ * FIXME:
+ * - change master/slave IDENTIFY order
+ * - force bit13 (80c cable present) check
+ * (unless the slave device is pre-ATA3)
+ */
return 1;
}
hwgroup->handler = handler;
hwgroup->expiry = expiry;
hwgroup->timer.expires = jiffies + timeout;
+ hwgroup->req_gen_timer = hwgroup->req_gen;
add_timer(&hwgroup->timer);
}
hwgroup->handler = handler;
hwgroup->expiry = expiry;
hwgroup->timer.expires = jiffies + timeout;
+ hwgroup->req_gen_timer = hwgroup->req_gen;
add_timer(&hwgroup->timer);
hwif->OUTBSYNC(drive, cmd, IDE_COMMAND_REG);
/* Drive takes 400nS to respond, we must avoid the IRQ being
if (HWIF(drive)->pre_reset != NULL)
HWIF(drive)->pre_reset(drive);
+ if (drive->current_speed != 0xff)
+ drive->desired_speed = drive->current_speed;
+ drive->current_speed = 0xff;
}
/*
static int ide_scan_direction; /* THIS was formerly 2.2.x pci=reverse */
#endif
-#ifdef CONFIG_IDEDMA_AUTO
int noautodma = 0;
-#else
-int noautodma = 1;
-#endif
EXPORT_SYMBOL(noautodma);
static int set_using_dma (ide_drive_t *drive, int arg)
{
#ifdef CONFIG_BLK_DEV_IDEDMA
+ ide_hwif_t *hwif = drive->hwif;
+ int err = -EPERM;
+
if (!drive->id || !(drive->id->capability & 1))
- return -EPERM;
- if (HWIF(drive)->ide_dma_check == NULL)
- return -EPERM;
+ goto out;
+
+ if (hwif->ide_dma_check == NULL)
+ goto out;
+
+ err = -EBUSY;
+ if (ide_spin_wait_hwgroup(drive))
+ goto out;
+ /*
+ * set ->busy flag, unlock and let it ride
+ */
+ hwif->hwgroup->busy = 1;
+ spin_unlock_irq(&ide_lock);
+
+ err = 0;
+
if (arg) {
- if (ide_set_dma(drive))
- return -EIO;
- if (HWIF(drive)->ide_dma_on(drive)) return -EIO;
+ if (ide_set_dma(drive) || hwif->ide_dma_on(drive))
+ err = -EIO;
} else
ide_dma_off(drive);
- return 0;
+
+ /*
+ * lock, clear ->busy flag and unlock before leaving
+ */
+ spin_lock_irq(&ide_lock);
+ hwif->hwgroup->busy = 0;
+ spin_unlock_irq(&ide_lock);
+out:
+ return err;
#else
return -EPERM;
#endif
return "tape";
case ide_floppy:
return "floppy";
+ case ide_optical:
+ return "optical";
default:
return "UNKNOWN";
}
_auide_hwif *ahwif = &auide_hwif;
ide_hwif_t *hwif;
struct resource *res;
+ hw_regs_t *hw;
int ret = 0;
#if defined(CONFIG_BLK_DEV_IDE_AU1XXX_MDMA2_DBDMA)
/* FIXME: This might possibly break PCMCIA IDE devices */
hwif = &ide_hwifs[pdev->id];
- hw_regs_t *hw = &hwif->hw;
+ hw = &hwif->hw;
hwif->irq = hw->irq = ahwif->irq;
hwif->chipset = ide_au1xxx;
obj-$(CONFIG_BLK_DEV_ALI15X3) += alim15x3.o
obj-$(CONFIG_BLK_DEV_AMD74XX) += amd74xx.o
obj-$(CONFIG_BLK_DEV_ATIIXP) += atiixp.o
+obj-$(CONFIG_BLK_DEV_CELLEB) += scc_pata.o
obj-$(CONFIG_BLK_DEV_CMD64X) += cmd64x.o
obj-$(CONFIG_BLK_DEV_CS5520) += cs5520.o
obj-$(CONFIG_BLK_DEV_CS5530) += cs5530.o
/* $Id: cmd64x.c,v 1.21 2000/01/30 23:23:16
*
- * linux/drivers/ide/pci/cmd64x.c Version 1.41 Feb 3, 2007
+ * linux/drivers/ide/pci/cmd64x.c Version 1.42 Feb 8, 2007
*
* cmd64x.c: Enable interrupts at initialization time on Ultra/PCI machines.
* Note, this driver is not used at all on other systems because
#endif /* defined(DISPLAY_CMD64X_TIMINGS) && defined(CONFIG_PROC_FS) */
+static u8 quantize_timing(int timing, int quant)
+{
+ return (timing + quant - 1) / quant;
+}
+
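The new quantize_timing() helper is plain ceiling division, turning a nanosecond figure into the number of whole bus-clock periods needed to cover it; a worked example with illustrative numbers (33 MHz bus, typical PIO mode 4 timings of about 25/70/120 ns for setup/active/cycle):

	/*
	 * clock_time = 1000 / 33 = 30 ns (integer math, as above)
	 * setup_count    = quantize_timing( 25, 30) = ( 25 + 29) / 30 = 1
	 * active_count   = quantize_timing( 70, 30) = ( 70 + 29) / 30 = 3
	 * cycle_count    = quantize_timing(120, 30) = (120 + 29) / 30 = 4
	 * recovery_count = cycle_count - active_count = 1
	 */
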
/*
* This routine writes the prepared setup/active/recovery counts
* for a drive into the cmd646 chipset registers to active them.
*/
static u8 cmd64x_tune_pio (ide_drive_t *drive, u8 mode_wanted)
{
- int setup_time, active_time, recovery_time;
- int clock_time, pio_mode, cycle_time;
- u8 recovery_count2, cycle_count;
- int setup_count, active_count, recovery_count;
- int bus_speed = system_bus_clock();
- ide_pio_data_t d;
+ int setup_time, active_time, cycle_time;
+ u8 cycle_count, setup_count, active_count, recovery_count;
+ u8 pio_mode;
+ int clock_time = 1000 / system_bus_clock();
+ ide_pio_data_t pio;
- pio_mode = ide_get_best_pio_mode(drive, mode_wanted, 5, &d);
- cycle_time = d.cycle_time;
+ pio_mode = ide_get_best_pio_mode(drive, mode_wanted, 5, &pio);
+ cycle_time = pio.cycle_time;
- /*
- * I copied all this complicated stuff from cmd640.c and made a few
- * minor changes. For now I am just going to pray that it is correct.
- */
setup_time = ide_pio_timings[pio_mode].setup_time;
active_time = ide_pio_timings[pio_mode].active_time;
- recovery_time = cycle_time - (setup_time + active_time);
- clock_time = 1000 / bus_speed;
- cycle_count = (cycle_time + clock_time - 1) / clock_time;
-
- setup_count = (setup_time + clock_time - 1) / clock_time;
- active_count = (active_time + clock_time - 1) / clock_time;
+ setup_count = quantize_timing( setup_time, clock_time);
+ cycle_count = quantize_timing( cycle_time, clock_time);
+ active_count = quantize_timing(active_time, clock_time);
- recovery_count = (recovery_time + clock_time - 1) / clock_time;
- recovery_count2 = cycle_count - (setup_count + active_count);
- if (recovery_count2 > recovery_count)
- recovery_count = recovery_count2;
+ recovery_count = cycle_count - active_count;
+ /* program_drive_counts() takes care of zero recovery cycles */
if (recovery_count > 16) {
active_count += recovery_count - 16;
recovery_count = 16;
}
if (active_count > 16)
- active_count = 16; /* maximum allowed by cmd646 */
+ active_count = 16; /* maximum allowed by cmd64x */
program_drive_counts (drive, setup_count, active_count, recovery_count);
cmdprintk("%s: PIO mode wanted %d, selected %d (%dns)%s, "
"clocks=%d/%d/%d\n",
drive->name, mode_wanted, pio_mode, cycle_time,
- d.overridden ? " (overriding vendor mode)" : "",
+ pio.overridden ? " (overriding vendor mode)" : "",
setup_count, active_count, recovery_count);
return pio_mode;
static struct pci_device_id delkin_cb_pci_tbl[] __devinitdata = {
{ 0x1145, 0xf021, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+ { 0x1145, 0xf024, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{ 0, },
};
MODULE_DEVICE_TABLE(pci, delkin_cb_pci_tbl);
/*
- * linux/drivers/ide/pci/hpt366.c Version 1.01 Dec 23, 2006
+ * linux/drivers/ide/pci/hpt366.c Version 1.02 Apr 18, 2007
*
* Copyright (C) 1999-2003 Andre Hedrick <andre@linux-ide.org>
* Portions Copyright (C) 2001 Sun Microsystems, Inc.
* Portions Copyright (C) 2003 Red Hat Inc
- * Portions Copyright (C) 2005-2006 MontaVista Software, Inc.
+ * Portions Copyright (C) 2005-2007 MontaVista Software, Inc.
*
* Thanks to HighPoint Technologies for their assistance, and hardware.
* Special Thanks to Jon Burchmore in SanDiego for the deep pockets, his
.chip_type = HPT302N,
.max_mode = HPT302_ALLOW_ATA133_6 ? 4 : 3,
.dpll_clk = 77,
+ .settings = hpt37x_settings
};
static struct hpt_info hpt371n __devinitdata = {
return 0;
}
+/* If libata is configured, jmicron PCI quirk will configure it such
+ * that the SATA ports are in AHCI function while the PATA ports are
+ * in a separate IDE function. In such cases, match device class and
+ * attach only to IDE. If libata isn't configured, keep the old
+ * behavior for backward compatibility.
+ */
+#if defined(CONFIG_ATA) || defined(CONFIG_ATA_MODULE)
+#define JMB_CLASS PCI_CLASS_STORAGE_IDE << 8
+#define JMB_CLASS_MASK 0xffff00
+#else
+#define JMB_CLASS 0
+#define JMB_CLASS_MASK 0
+#endif
+
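The class/class_mask pair behaves like any other PCI ID table field: the core matches a device only when (dev->class & class_mask) == class. A sketch of what the values above select, using hypothetical prog-if bytes:

	/*
	 * PATA function, class 0x010185 (prog-if 0x85):
	 *	0x010185 & 0xffff00 == 0x010100 == PCI_CLASS_STORAGE_IDE << 8  -> matched
	 * AHCI function, class 0x010601:
	 *	0x010601 & 0xffff00 == 0x010600                                -> skipped
	 * With libata absent, JMB_CLASS/JMB_CLASS_MASK are 0/0, so every listed
	 * device matches and the old behaviour is preserved.
	 */
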
static struct pci_device_id jmicron_pci_tbl[] = {
- { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB361, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
- { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB363, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1},
- { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB365, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 2},
- { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB366, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 3},
- { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB368, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 4},
+ { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB361,
+ PCI_ANY_ID, PCI_ANY_ID, JMB_CLASS, JMB_CLASS_MASK, 0},
+ { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB363,
+ PCI_ANY_ID, PCI_ANY_ID, JMB_CLASS, JMB_CLASS_MASK, 1},
+ { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB365,
+ PCI_ANY_ID, PCI_ANY_ID, JMB_CLASS, JMB_CLASS_MASK, 2},
+ { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB366,
+ PCI_ANY_ID, PCI_ANY_ID, JMB_CLASS, JMB_CLASS_MASK, 3},
+ { PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB368,
+ PCI_ANY_ID, PCI_ANY_ID, JMB_CLASS, JMB_CLASS_MASK, 4},
{ 0, },
};
printk(KERN_WARNING "%s reduced to Ultra33 mode.\n", drive->name);
}
- if (drive->media != ide_disk)
+ if (drive->media != ide_disk && drive->media != ide_cdrom)
return 0;
if (id->capability & 4) {
hwif->drives[0].autotune = hwif->drives[1].autotune = 1;
+ hwif->atapi_dma = 1;
hwif->ultra_mask = 0x7f;
hwif->mwdma_mask = 0x07;
}
}
}
-
-#ifndef CONFIG_IDEDMA_PCI_AUTO
-#warning CONFIG_IDEDMA_PCI_AUTO=n support is obsolete, and will be removed soon.
-#endif
-
#endif /* CONFIG_BLK_DEV_IDEDMA_PCI*/
/**
tristate "OHCI-DV I/O support (deprecated)"
depends on IEEE1394 && IEEE1394_OHCI1394
help
- The dv1394 driver will be removed from Linux in a future release.
- Its functionality is now provided by raw1394 together with libraries
- such as libiec61883.
+ The dv1394 driver is unsupported and may be removed from Linux in a
+ future release. Its functionality is now provided by raw1394 together
+ with libraries such as libiec61883.
config IEEE1394_RAWIO
tristate "Raw IEEE1394 I/O support"
int ret;
printk(KERN_WARNING
- "WARNING: The dv1394 driver is unsupported and will be removed "
- "from Linux soon. Use raw1394 instead.\n");
+ "NOTE: The dv1394 driver is unsupported and may be removed in a "
+ "future Linux release. Use raw1394 instead.\n");
cdev_init(&dv1394_cdev, &dv1394_fops);
dv1394_cdev.owner = THIS_MODULE;
}
SET_MODULE_OWNER(dev);
+#if 0
+ /* FIXME - Is this the correct parent device anyway? */
SET_NETDEV_DEV(dev, &host->device);
+#endif
priv = netdev_priv(dev);
u64 sge_cmd, ctx0, ctx1;
u64 base_addr;
struct t3_modify_qp_wr *wqe;
- struct sk_buff *skb = alloc_skb(sizeof(*wqe), GFP_KERNEL);
-
+ struct sk_buff *skb;
+ skb = alloc_skb(sizeof(*wqe), GFP_KERNEL);
if (!skb) {
PDBG("%s alloc_skb failed\n", __FUNCTION__);
return -ENOMEM;
err = cxio_hal_init_ctrl_cq(rdev_p);
if (err) {
PDBG("%s err %d initializing ctrl_cq\n", __FUNCTION__, err);
- return err;
+ goto err;
}
rdev_p->ctrl_qp.workq = dma_alloc_coherent(
&(rdev_p->rnic_info.pdev->dev),
GFP_KERNEL);
if (!rdev_p->ctrl_qp.workq) {
PDBG("%s dma_alloc_coherent failed\n", __FUNCTION__);
- return -ENOMEM;
+ err = -ENOMEM;
+ goto err;
}
pci_unmap_addr_set(&rdev_p->ctrl_qp, mapping,
rdev_p->ctrl_qp.dma_addr);
rdev_p->ctrl_qp.workq, 1 << T3_CTRL_QP_SIZE_LOG2);
skb->priority = CPL_PRIORITY_CONTROL;
return (cxgb3_ofld_send(rdev_p->t3cdev_p, skb));
+err:
+ kfree_skb(skb);
+ return err;
}
static int cxio_hal_destroy_ctrl_qp(struct cxio_rdev *rdev_p)
return 0;
}
+static int set_tcb_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
+{
+ struct cpl_set_tcb_rpl *rpl = cplhdr(skb);
+
+ if (rpl->status != CPL_ERR_NONE) {
+ printk(KERN_ERR MOD "Unexpected SET_TCB_RPL status %u "
+ "for tid %u\n", rpl->status, GET_TID(rpl));
+ }
+ return CPL_RET_BUF_DONE;
+}
+
int __init iwch_cm_init(void)
{
skb_queue_head_init(&rxq);
t3c_handlers[CPL_ABORT_REQ_RSS] = sched;
t3c_handlers[CPL_RDMA_TERMINATE] = sched;
t3c_handlers[CPL_RDMA_EC_STATUS] = sched;
+ t3c_handlers[CPL_SET_TCB_RPL] = set_tcb_rpl;
/*
* These are the real handlers that are called from a
php = to_iwch_pd(pd);
if (mr_rereg_mask & IB_MR_REREG_ACCESS)
mh.attr.perms = iwch_ib_to_tpt_access(acc);
- if (mr_rereg_mask & IB_MR_REREG_TRANS)
+ if (mr_rereg_mask & IB_MR_REREG_TRANS) {
ret = build_phys_page_list(buffer_list, num_phys_buf,
iova_start,
&total_size, &npages,
&shift, &page_list);
+ if (ret)
+ return ret;
+ }
ret = iwch_reregister_mem(rhp, php, &mh, shift, page_list, npages);
kfree(page_list);
static void queue_comp_task(struct ehca_cq *__cq);
static struct ehca_comp_pool* pool;
+#ifdef CONFIG_HOTPLUG_CPU
static struct notifier_block comp_pool_callback_nb;
+#endif
static inline void comp_event_callback(struct ehca_cq *cq)
{
}
+#ifdef CONFIG_HOTPLUG_CPU
static int comp_pool_callback(struct notifier_block *nfb,
unsigned long action,
void *hcpu)
return NOTIFY_OK;
}
+#endif
int ehca_create_comp_pool(void)
{
}
}
+#ifdef CONFIG_HOTPLUG_CPU
comp_pool_callback_nb.notifier_call = comp_pool_callback;
comp_pool_callback_nb.priority =0;
register_cpu_notifier(&comp_pool_callback_nb);
+#endif
printk(KERN_INFO "eHCA scaling code enabled\n");
if (!ehca_scaling_code)
return;
+#ifdef CONFIG_HOTPLUG_CPU
unregister_cpu_notifier(&comp_pool_callback_nb);
+#endif
for (i = 0; i < NR_CPUS; i++) {
if (cpu_online(i))
}
static void ipath_dma_free_coherent(struct ib_device *dev, size_t size,
- void *cpu_addr, dma_addr_t dma_handle)
+ void *cpu_addr, u64 dma_handle)
{
free_pages((unsigned long) cpu_addr, get_order(size));
}
return ret;
}
-static void remove_file(struct dentry *parent, char *name)
+static int remove_file(struct dentry *parent, char *name)
{
struct dentry *tmp;
+ int ret;
tmp = lookup_one_len(name, parent, strlen(name));
+ if (IS_ERR(tmp)) {
+ ret = PTR_ERR(tmp);
+ goto bail;
+ }
+
spin_lock(&dcache_lock);
spin_lock(&tmp->d_lock);
if (!(d_unhashed(tmp) && tmp->d_inode)) {
spin_unlock(&tmp->d_lock);
spin_unlock(&dcache_lock);
}
+
+ ret = 0;
+bail:
+ /*
+ * We don't expect clients to care about the return value, but
+ * it's there if they need it.
+ */
+ return ret;
}
static int remove_device_files(struct super_block *sb,
key = arbel_key_to_hw_index(fmr->ibmr.lkey);
key &= dev->limits.num_mpts - 1;
+ key = adjust_key(dev, key);
fmr->ibmr.lkey = fmr->ibmr.rkey = arbel_hw_index_to_key(key);
fmr->maps = 0;
}
mpts = mtts = 1 << i;
} else {
- mpts = dev->limits.num_mtt_segs;
- mtts = dev->limits.num_mpts;
+ mtts = dev->limits.num_mtt_segs;
+ mpts = dev->limits.num_mpts;
}
if (!mthca_is_memfree(dev) &&
skb_fill_page_desc(skb, i, page, 0, PAGE_SIZE);
mapping[i + 1] = ib_dma_map_page(priv->ca, skb_shinfo(skb)->frags[i].page,
- 0, PAGE_SIZE, DMA_TO_DEVICE);
+ 0, PAGE_SIZE, DMA_FROM_DEVICE);
if (unlikely(ib_dma_mapping_error(priv->ca, mapping[i + 1])))
goto partial_error;
}
skb->len, tx->mtu);
++priv->stats.tx_dropped;
++priv->stats.tx_errors;
- ipoib_cm_skb_too_long(dev, skb, tx->mtu - INFINIBAND_ALEN);
+ ipoib_cm_skb_too_long(dev, skb, tx->mtu - IPOIB_ENCAP_LEN);
return;
}
/* List if sorted by LRU, start from tail,
* stop when we see a recently used entry */
p = list_entry(priv->cm.passive_ids.prev, typeof(*p), list);
- if (time_after_eq(jiffies, p->jiffies + IPOIB_CM_RX_TIMEOUT))
+ if (time_before_eq(jiffies, p->jiffies + IPOIB_CM_RX_TIMEOUT))
break;
list_del_init(&p->list);
spin_unlock_irqrestore(&priv->lock, flags);
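
The sense of the jiffies comparison is the crux of this change; a quick illustration with hypothetical values:

	/*
	 * p->jiffies = 1000, IPOIB_CM_RX_TIMEOUT = 500:
	 *	jiffies = 1200: time_before_eq(1200, 1500) is true  -> entry still
	 *		inside its window (recently used), so the scan stops;
	 *	jiffies = 1600: time_before_eq(1600, 1500) is false -> entry is
	 *		stale and the loop goes on to reap it.
	 */
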
struct ipoib_tx_buf *tx_req;
u64 addr;
- if (unlikely(skb->len > priv->mcast_mtu + INFINIBAND_ALEN)) {
+ if (unlikely(skb->len > priv->mcast_mtu + IPOIB_ENCAP_LEN)) {
ipoib_warn(priv, "packet len %d (> %d) too long to send, dropping\n",
- skb->len, priv->mcast_mtu + INFINIBAND_ALEN);
+ skb->len, priv->mcast_mtu + IPOIB_ENCAP_LEN);
++priv->stats.tx_dropped;
++priv->stats.tx_errors;
ipoib_cm_skb_too_long(dev, skb, priv->mcast_mtu);
struct net_device *dev = path->dev;
struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ipoib_ah *ah = NULL;
- struct ipoib_neigh *neigh;
+ struct ipoib_neigh *neigh, *tn;
struct sk_buff_head skqueue;
struct sk_buff *skb;
unsigned long flags;
while ((skb = __skb_dequeue(&path->queue)))
__skb_queue_tail(&skqueue, skb);
- list_for_each_entry(neigh, &path->neigh_list, list) {
+ list_for_each_entry_safe(neigh, tn, &path->neigh_list, list) {
kref_get(&path->ah->ref);
neigh->ah = path->ah;
memcpy(&neigh->dgid.raw, &path->pathrec.dgid.raw,
queue_work(ipoib_workqueue, &priv->restart_task);
}
-static void ipoib_neigh_destructor(struct neighbour *n)
+static void ipoib_neigh_cleanup(struct neighbour *n)
{
struct ipoib_neigh *neigh;
struct ipoib_dev_priv *priv = netdev_priv(n->dev);
struct ipoib_ah *ah = NULL;
ipoib_dbg(priv,
- "neigh_destructor for %06x " IPOIB_GID_FMT "\n",
+ "neigh_cleanup for %06x " IPOIB_GID_FMT "\n",
IPOIB_QPN(n->ha),
IPOIB_GID_RAW_ARG(n->ha + 4));
static int ipoib_neigh_setup_dev(struct net_device *dev, struct neigh_parms *parms)
{
- parms->neigh_destructor = ipoib_neigh_destructor;
+ parms->neigh_cleanup = ipoib_neigh_cleanup;
return 0;
}
struct ipoib_dev_priv *priv = netdev_priv(dev);
int ret = 0;
+ if (test_and_clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags))
+ ib_sa_free_multicast(mcast->mc);
+
if (test_and_clear_bit(IPOIB_MCAST_FLAG_ATTACHED, &mcast->flags)) {
ipoib_dbg_mcast(priv, "leaving MGID " IPOIB_GID_FMT "\n",
IPOIB_GID_ARG(mcast->mcmember.mgid));
ipoib_warn(priv, "ipoib_mcast_detach failed (result = %d)\n", ret);
}
- if (test_and_clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags))
- ib_sa_free_multicast(mcast->mc);
-
return 0;
}
wait_queue_head_t wait; /* waitq for conn/disconn */
atomic_t post_recv_buf_count; /* posted rx count */
atomic_t post_send_buf_count; /* posted tx count */
- struct work_struct comperror_work; /* conn term sleepable ctx*/
char name[ISER_OBJECT_NAME_SIZE];
struct iser_page_vec *page_vec; /* represents SG to fmr maps*
* maps serialized as tx is*/
{
int deferred;
int is_rdma_aligned = 1;
+ struct iser_regd_buf *regd;
/* if we were reading, copy back to unaligned sglist,
* anyway dma_unmap and free the copy
}
if (iser_ctask->dir[ISER_DIR_IN]) {
- deferred = iser_regd_buff_release
- (&iser_ctask->rdma_regd[ISER_DIR_IN]);
+ regd = &iser_ctask->rdma_regd[ISER_DIR_IN];
+ deferred = iser_regd_buff_release(regd);
if (deferred) {
- iser_err("References remain for BUF-IN rdma reg\n");
- BUG();
+ iser_err("%d references remain for BUF-IN rdma reg\n",
+ atomic_read(&regd->ref_count));
}
}
if (iser_ctask->dir[ISER_DIR_OUT]) {
- deferred = iser_regd_buff_release
- (&iser_ctask->rdma_regd[ISER_DIR_OUT]);
+ regd = &iser_ctask->rdma_regd[ISER_DIR_OUT];
+ deferred = iser_regd_buff_release(regd);
if (deferred) {
- iser_err("References remain for BUF-OUT rdma reg\n");
- BUG();
+ iser_err("%d references remain for BUF-OUT rdma reg\n",
+ atomic_read(&regd->ref_count));
}
}
static void iser_cq_tasklet_fn(unsigned long data);
static void iser_cq_callback(struct ib_cq *cq, void *cq_context);
-static void iser_comp_error_worker(struct work_struct *work);
static void iser_cq_event_callback(struct ib_event *cause, void *context)
{
init_waitqueue_head(&ib_conn->wait);
atomic_set(&ib_conn->post_recv_buf_count, 0);
atomic_set(&ib_conn->post_send_buf_count, 0);
- INIT_WORK(&ib_conn->comperror_work, iser_comp_error_worker);
INIT_LIST_HEAD(&ib_conn->conn_list);
spin_lock_init(&ib_conn->lock);
return ret_val;
}
-static void iser_comp_error_worker(struct work_struct *work)
-{
- struct iser_conn *ib_conn =
- container_of(work, struct iser_conn, comperror_work);
-
- /* getting here when the state is UP means that the conn is being *
- * terminated asynchronously from the iSCSI layer's perspective. */
- if (iser_conn_state_comp_exch(ib_conn, ISER_CONN_UP,
- ISER_CONN_TERMINATING))
- iscsi_conn_failure(ib_conn->iser_conn->iscsi_conn,
- ISCSI_ERR_CONN_FAILED);
-
- /* complete the termination process if disconnect event was delivered *
- * note there are no more non completed posts to the QP */
- if (ib_conn->disc_evt_flag) {
- ib_conn->state = ISER_CONN_DOWN;
- wake_up_interruptible(&ib_conn->wait);
- }
-}
-
static void iser_handle_comp_error(struct iser_desc *desc)
{
struct iser_dto *dto = &desc->dto;
}
if (atomic_read(&ib_conn->post_recv_buf_count) == 0 &&
- atomic_read(&ib_conn->post_send_buf_count) == 0)
- schedule_work(&ib_conn->comperror_work);
+ atomic_read(&ib_conn->post_send_buf_count) == 0) {
+ /* getting here when the state is UP means that the conn is *
+ * being terminated asynchronously from the iSCSI layer's *
+ * perspective. */
+ if (iser_conn_state_comp_exch(ib_conn, ISER_CONN_UP,
+ ISER_CONN_TERMINATING))
+ iscsi_conn_failure(ib_conn->iser_conn->iscsi_conn,
+ ISCSI_ERR_CONN_FAILED);
+
+ /* complete the termination process if disconnect event was delivered *
+ * note there are no more non completed posts to the QP */
+ if (ib_conn->disc_evt_flag) {
+ ib_conn->state = ISER_CONN_DOWN;
+ wake_up_interruptible(&ib_conn->wait);
+ }
+ }
}
static void iser_cq_tasklet_fn(unsigned long data)
}
static struct device_driver ucb1400_ts_driver = {
+ .name = "ucb1400_ts",
.owner = THIS_MODULE,
.bus = &ac97_bus_type,
.probe = ucb1400_ts_probe,
#define USB_SX353_PRODUCT_ID 0x0022
/* table of devices that work with this driver */
-static struct usb_device_id gigaset_table [] = {
+static const struct usb_device_id gigaset_table [] = {
{ USB_DEVICE(USB_GIGA_VENDOR_ID, USB_3070_PRODUCT_ID) },
{ USB_DEVICE(USB_GIGA_VENDOR_ID, USB_3075_PRODUCT_ID) },
{ USB_DEVICE(USB_GIGA_VENDOR_ID, USB_SX303_PRODUCT_ID) },
gigaset_unassign(cs);
}
-static struct gigaset_ops gigops = {
+static const struct gigaset_ops gigops = {
gigaset_write_cmd,
gigaset_write_room,
gigaset_chars_in_buffer,
struct cardstate *gigaset_get_cs_by_id(int id)
{
unsigned long flags;
- static struct cardstate *ret = NULL;
- static struct cardstate *cs;
+ struct cardstate *ret = NULL;
+ struct cardstate *cs;
struct gigaset_driver *drv;
unsigned i;
static struct cardstate *gigaset_get_cs_by_minor(unsigned minor)
{
unsigned long flags;
- static struct cardstate *ret = NULL;
+ struct cardstate *ret = NULL;
struct gigaset_driver *drv;
unsigned index;
};
#endif
-static struct resp_type_t resp_type[]=
+static const struct resp_type_t resp_type[] =
{
/*{"", RSP_EMPTY, RT_NOTHING},*/
{"OK", RSP_OK, RT_NOTHING},
unsigned char *argv[MAX_REC_PARAMS + 1];
int params;
int i, j;
- struct resp_type_t *rt;
+ const struct resp_type_t *rt;
int curarg;
unsigned long flags;
unsigned next, tail, head;
* bit 12..10 = number of trailing '1' bits in result
* bit 14..13 = number of bits added by stuffing
*/
-static u16 stufftab[5 * 256] = {
+static const u16 stufftab[5 * 256] = {
// previous 1s = 0:
0x0000, 0x0001, 0x0002, 0x0003, 0x0004, 0x0005, 0x0006, 0x0007, 0x0008, 0x0009, 0x000a, 0x000b, 0x000c, 0x000d, 0x000e, 0x000f,
0x0010, 0x0011, 0x0012, 0x0013, 0x0014, 0x0015, 0x0016, 0x0017, 0x0018, 0x0019, 0x001a, 0x001b, 0x001c, 0x001d, 0x001e, 0x201f,
* (replacing 8 by 7 to make it fit; the algorithm won't care)
* bit 7 set if there are 5 or more "interior" consecutive '1' bits
*/
-static unsigned char bitcounts[256] = {
+static const unsigned char bitcounts[256] = {
0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x00, 0x03, 0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x00, 0x04,
0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x00, 0x03, 0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x00, 0x05,
0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x00, 0x03, 0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x00, 0x04,
return -EINVAL;
}
-static struct gigaset_ops ops = {
+static const struct gigaset_ops ops = {
gigaset_write_cmd,
gigaset_write_room,
gigaset_chars_in_buffer,
#define USB_M105_PRODUCT_ID 0x0009
/* table of devices that work with this driver */
-static struct usb_device_id gigaset_table [] = {
+static const struct usb_device_id gigaset_table [] = {
{ USB_DEVICE(USB_M105_VENDOR_ID, USB_M105_PRODUCT_ID) },
{ } /* Terminating entry */
};
gigaset_unassign(cs);
}
-static struct gigaset_ops ops = {
+static const struct gigaset_ops ops = {
gigaset_write_cmd,
gigaset_write_room,
gigaset_chars_in_buffer,
{
struct BCState *bcs = container_of(work, struct BCState, tqueue);
- BChannel_bh(bcs);
+ BChannel_bh(work);
if (test_and_clear_bit(B_LL_NOCARRIER, &bcs->event))
ll_deliver_faxstat(bcs, ISDN_FAX_CLASS1_NOCARR);
if (test_and_clear_bit(B_LL_CONNECT, &bcs->event))
r = kvm_arch_ops->hardware_setup();
if (r < 0)
- return r;
+ goto out;
on_each_cpu(kvm_arch_ops->hardware_enable, NULL, 0, 1);
r = register_cpu_notifier(&kvm_cpu_notifier);
out_free_1:
on_each_cpu(kvm_arch_ops->hardware_disable, NULL, 0, 1);
kvm_arch_ops->hardware_unsetup();
+out:
+ kvm_arch_ops = NULL;
return r;
}
(((address) >> PT32_LEVEL_SHIFT(level)) & ((1 << PT32_LEVEL_BITS) - 1))
-#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & PAGE_MASK)
+#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
#define PT64_DIR_BASE_ADDR_MASK \
(PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + PT64_LEVEL_BITS)) - 1))
spte = desc->shadow_ptes[0];
}
BUG_ON(!spte);
- BUG_ON((*spte & PT64_BASE_ADDR_MASK) !=
- page_to_pfn(page) << PAGE_SHIFT);
+ BUG_ON((*spte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT
+ != page_to_pfn(page));
BUG_ON(!(*spte & PT_PRESENT_MASK));
BUG_ON(!(*spte & PT_WRITABLE_MASK));
rmap_printk("rmap_write_protect: spte %p %llx\n", spte, *spte);
return r;
}
+static void mmu_pre_write_zap_pte(struct kvm_vcpu *vcpu,
+ struct kvm_mmu_page *page,
+ u64 *spte)
+{
+ u64 pte;
+ struct kvm_mmu_page *child;
+
+ pte = *spte;
+ if (is_present_pte(pte)) {
+ if (page->role.level == PT_PAGE_TABLE_LEVEL)
+ rmap_remove(vcpu, spte);
+ else {
+ child = page_header(pte & PT64_BASE_ADDR_MASK);
+ mmu_page_remove_parent_pte(vcpu, child, spte);
+ }
+ }
+ *spte = 0;
+}
+
void kvm_mmu_pre_write(struct kvm_vcpu *vcpu, gpa_t gpa, int bytes)
{
gfn_t gfn = gpa >> PAGE_SHIFT;
struct kvm_mmu_page *page;
- struct kvm_mmu_page *child;
struct hlist_node *node, *n;
struct hlist_head *bucket;
unsigned index;
u64 *spte;
- u64 pte;
unsigned offset = offset_in_page(gpa);
unsigned pte_size;
unsigned page_offset;
unsigned misaligned;
int level;
int flooded = 0;
+ int npte;
pgprintk("%s: gpa %llx bytes %d\n", __FUNCTION__, gpa, bytes);
if (gfn == vcpu->last_pt_write_gfn) {
}
page_offset = offset;
level = page->role.level;
+ npte = 1;
if (page->role.glevels == PT32_ROOT_LEVEL) {
- page_offset <<= 1; /* 32->64 */
+ page_offset <<= 1; /* 32->64 */
+ /*
+ * A 32-bit pde maps 4MB while the shadow pdes map
+ * only 2MB. So we need to double the offset again
+ * and zap two pdes instead of one.
+ */
+ if (level == PT32_ROOT_LEVEL) {
+ page_offset &= ~7; /* kill rounding error */
+ page_offset <<= 1;
+ npte = 2;
+ }
page_offset &= ~PAGE_MASK;
}
spte = __va(page->page_hpa);
spte += page_offset / sizeof(*spte);
- pte = *spte;
- if (is_present_pte(pte)) {
- if (level == PT_PAGE_TABLE_LEVEL)
- rmap_remove(vcpu, spte);
- else {
- child = page_header(pte & PT64_BASE_ADDR_MASK);
- mmu_page_remove_parent_pte(vcpu, child, spte);
- }
+ while (npte--) {
+ mmu_pre_write_zap_pte(vcpu, page, spte);
+ ++spte;
}
- *spte = 0;
}
}
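
A worked example of the offset doubling above, with hypothetical numbers: suppose a 32-bit guest writes the pde at byte offset 12 of its page directory (pde index 3, covering the 4 MB guest range 12 MB-16 MB).

	/*
	 * page_offset = 12
	 * glevels == PT32_ROOT_LEVEL:	page_offset <<= 1        -> 24
	 * level == PT32_ROOT_LEVEL:	&= ~7, <<= 1, npte = 2   -> 48
	 * sptes zapped: byte offsets 48 and 56, i.e. shadow pde indexes 6 and 7,
	 * each mapping 2 MB, together covering that same 12 MB-16 MB range.
	 */
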
data = vmcs_read32(GUEST_SYSENTER_CS);
break;
case MSR_IA32_SYSENTER_EIP:
- data = vmcs_read32(GUEST_SYSENTER_EIP);
+ data = vmcs_readl(GUEST_SYSENTER_EIP);
break;
case MSR_IA32_SYSENTER_ESP:
- data = vmcs_read32(GUEST_SYSENTER_ESP);
+ data = vmcs_readl(GUEST_SYSENTER_ESP);
break;
default:
msr = find_msr_entry(vcpu, msr_index);
vmcs_write32(GUEST_SYSENTER_CS, data);
break;
case MSR_IA32_SYSENTER_EIP:
- vmcs_write32(GUEST_SYSENTER_EIP, data);
+ vmcs_writel(GUEST_SYSENTER_EIP, data);
break;
case MSR_IA32_SYSENTER_ESP:
- vmcs_write32(GUEST_SYSENTER_ESP, data);
+ vmcs_writel(GUEST_SYSENTER_ESP, data);
break;
case MSR_IA32_TIME_STAMP_COUNTER:
guest_write_tsc(data);
{
struct kvm_vmx_segment_field *sf = &kvm_vmx_segment_fields[seg];
- if (vmcs_readl(sf->base) == save->base) {
+ if (vmcs_readl(sf->base) == save->base && (save->base & AR_S_MASK)) {
vmcs_write16(sf->selector, save->selector);
vmcs_writel(sf->base, save->base);
vmcs_write32(sf->limit, save->limit);
[cr2]"i"(offsetof(struct kvm_vcpu, cr2))
: "cc", "memory" );
+ /*
+ * Reload segment selectors ASAP. (it's needed for a functional
+ * kernel: x86 relies on having __KERNEL_PDA in %fs and x86_64
+ * relies on having 0 in %gs for the CPU PDA to work.)
+ */
+ if (fs_gs_ldt_reload_needed) {
+ load_ldt(ldt_sel);
+ load_fs(fs_sel);
+ /*
+ * If we have to reload gs, we must take care to
+ * preserve our gs base.
+ */
+ local_irq_disable();
+ load_gs(gs_sel);
+#ifdef CONFIG_X86_64
+ wrmsrl(MSR_GS_BASE, vmcs_readl(HOST_GS_BASE));
+#endif
+ local_irq_enable();
+
+ reload_tss();
+ }
++kvm_stat.exits;
save_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
kvm_run->exit_reason = vmcs_read32(VM_INSTRUCTION_ERROR);
r = 0;
} else {
- if (fs_gs_ldt_reload_needed) {
- load_ldt(ldt_sel);
- load_fs(fs_sel);
- /*
- * If we have to reload gs, we must take care to
- * preserve our gs base.
- */
- local_irq_disable();
- load_gs(gs_sel);
-#ifdef CONFIG_X86_64
- wrmsrl(MSR_GS_BASE, vmcs_readl(HOST_GS_BASE));
-#endif
- local_irq_enable();
-
- reload_tss();
- }
/*
* Profile KVM exit RIPs:
*/
set_current_state(TASK_UNINTERRUPTIBLE);
if (pp->cmd.status != 1)
break;
- spin_lock_irqsave(&pp->lock, flags);
- schedule();
spin_unlock_irqrestore(&pp->lock, flags);
+ schedule();
+ spin_lock_irqsave(&pp->lock, flags);
}
set_current_state(TASK_RUNNING);
remove_wait_queue(&pp->wait, &wait);
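
The reordering above matters because schedule() must never be called with a spinlock held (here with interrupts disabled as well); the safe shape of such a wait loop, sketched with placeholder names (lock, flags, done, wq, wait), is:

	add_wait_queue(&wq, &wait);
	spin_lock_irqsave(&lock, flags);
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (done)
			break;
		spin_unlock_irqrestore(&lock, flags);	/* never sleep with the lock held */
		schedule();
		spin_lock_irqsave(&lock, flags);	/* retake it before re-checking */
	}
	set_current_state(TASK_RUNNING);
	spin_unlock_irqrestore(&lock, flags);
	remove_wait_queue(&wq, &wait);
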
/* We need 4 bits per page, rounded up to a multiple of sizeof(unsigned long) */
bitmap->filemap_attr = kzalloc(
- (((num_pages*4/8)+sizeof(unsigned long)-1)
- /sizeof(unsigned long))
- *sizeof(unsigned long),
+ roundup(DIV_ROUND_UP(num_pages*4, 8), sizeof(unsigned long)),
GFP_KERNEL);
if (!bitmap->filemap_attr)
goto out;
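
The rewritten allocation size is the same quantity the old open-coded expression computed, just spelled with the standard helpers; for example, on a 64-bit host with num_pages = 100:

	/*
	 * 100 pages * 4 bits                     = 400 bits
	 * DIV_ROUND_UP(400, 8)                   = 50 bytes
	 * roundup(50, sizeof(unsigned long) = 8) = 56 bytes allocated (zeroed)
	 */
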
for (i=0; i < cnt-1 ; i++) {
sector_t sz = 0;
int j;
- for (j=i; i<cnt-1 && sz < min_spacing ; j++)
+ for (j = i; j < cnt - 1 && sz < min_spacing; j++)
sz += conf->disks[j].size;
if (sz >= min_spacing && sz < conf->hash_spacing)
conf->hash_spacing = sz;
char b[BDEVNAME_SIZE];
struct kobject *ko;
char *s;
+ int err;
if (rdev->mddev) {
MD_BUG();
while ( (s=strchr(rdev->kobj.k_name, '/')) != NULL)
*s = '!';
- list_add(&rdev->same_set, &mddev->disks);
rdev->mddev = mddev;
printk(KERN_INFO "md: bind<%s>\n", b);
rdev->kobj.parent = &mddev->kobj;
- kobject_add(&rdev->kobj);
+ if ((err = kobject_add(&rdev->kobj)))
+ goto fail;
if (rdev->bdev->bd_part)
ko = &rdev->bdev->bd_part->kobj;
else
ko = &rdev->bdev->bd_disk->kobj;
- sysfs_create_link(&rdev->kobj, ko, "block");
+ if ((err = sysfs_create_link(&rdev->kobj, ko, "block"))) {
+ kobject_del(&rdev->kobj);
+ goto fail;
+ }
+ list_add(&rdev->same_set, &mddev->disks);
bd_claim_by_disk(rdev->bdev, rdev, mddev->gendisk);
return 0;
+
+ fail:
+ printk(KERN_WARNING "md: failed to register dev-%s for %s\n",
+ b, mdname(mddev));
+ return err;
+}
+
+static void delayed_delete(struct work_struct *ws)
+{
+ mdk_rdev_t *rdev = container_of(ws, mdk_rdev_t, del_work);
+ kobject_del(&rdev->kobj);
}
static void unbind_rdev_from_array(mdk_rdev_t * rdev)
printk(KERN_INFO "md: unbind<%s>\n", bdevname(rdev->bdev,b));
rdev->mddev = NULL;
sysfs_remove_link(&rdev->kobj, "block");
- kobject_del(&rdev->kobj);
+
+ /* We need to delay this, otherwise we can deadlock when
+ * writing 'remove' to "dev/state"
+ */
+ INIT_WORK(&rdev->del_work, delayed_delete);
+ schedule_work(&rdev->del_work);
}
/*
mddev->kobj.k_name = NULL;
snprintf(mddev->kobj.name, KOBJ_NAME_LEN, "%s", "md");
mddev->kobj.ktype = &md_ktype;
- kobject_register(&mddev->kobj);
+ if (kobject_register(&mddev->kobj))
+ printk(KERN_WARNING "md: cannot register %s/md - name in use\n",
+ disk->disk_name);
return NULL;
}
bitmap_destroy(mddev);
return err;
}
- if (mddev->pers->sync_request)
- sysfs_create_group(&mddev->kobj, &md_redundancy_group);
- else if (mddev->ro == 2) /* auto-readonly not meaningful */
+ if (mddev->pers->sync_request) {
+ if (sysfs_create_group(&mddev->kobj, &md_redundancy_group))
+ printk(KERN_WARNING
+ "md: cannot register extra attributes for %s\n",
+ mdname(mddev));
+ } else if (mddev->ro == 2) /* auto-readonly not meaningful */
mddev->ro = 0;
atomic_set(&mddev->writes_pending,0);
if (rdev->raid_disk >= 0) {
char nm[20];
sprintf(nm, "rd%d", rdev->raid_disk);
- sysfs_create_link(&mddev->kobj, &rdev->kobj, nm);
+ if (sysfs_create_link(&mddev->kobj, &rdev->kobj, nm))
+ printk("md: cannot register %s for %s\n",
+ nm, mdname(mddev));
}
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
mddev->queue->merge_bvec_fn = NULL;
mddev->queue->unplug_fn = NULL;
mddev->queue->issue_flush_fn = NULL;
+ mddev->queue->backing_dev_info.congested_fn = NULL;
if (mddev->pers->sync_request)
sysfs_remove_group(&mddev->kobj, &md_redundancy_group);
sysfs_remove_link(&mddev->kobj, nm);
}
+ /* make sure all delayed_delete calls have finished */
+ flush_scheduled_work();
+
export_array(mddev);
mddev->array_size = 0;
if (mddev->pers->hot_add_disk(mddev,rdev)) {
char nm[20];
sprintf(nm, "rd%d", rdev->raid_disk);
- sysfs_create_link(&mddev->kobj,
- &rdev->kobj, nm);
+ if (sysfs_create_link(&mddev->kobj,
+ &rdev->kobj, nm))
+ printk(KERN_WARNING
+ "md: cannot register "
+ "%s for %s\n",
+ nm, mdname(mddev));
spares++;
md_new_event(mddev);
} else
}
/* Ok, everything is just fine now */
- sysfs_create_group(&mddev->kobj, &raid5_attrs_group);
+ if (sysfs_create_group(&mddev->kobj, &raid5_attrs_group))
+ printk(KERN_WARNING
+ "raid5: failed to create sysfs attributes for %s\n",
+ mdname(mddev));
mddev->queue->unplug_fn = raid5_unplug_device;
mddev->queue->issue_flush_fn = raid5_issue_flush;
- mddev->queue->backing_dev_info.congested_fn = raid5_congested;
mddev->queue->backing_dev_info.congested_data = mddev;
+ mddev->queue->backing_dev_info.congested_fn = raid5_congested;
mddev->array_size = mddev->size * (conf->previous_raid_disks -
conf->max_degraded);
mddev->thread = NULL;
shrink_stripes(conf);
kfree(conf->stripe_hashtbl);
+ mddev->queue->backing_dev_info.congested_fn = NULL;
blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
sysfs_remove_group(&mddev->kobj, &raid5_attrs_group);
kfree(conf->disks);
added_devices++;
rdev->recovery_offset = 0;
sprintf(nm, "rd%d", rdev->raid_disk);
- sysfs_create_link(&mddev->kobj, &rdev->kobj, nm);
+ if (sysfs_create_link(&mddev->kobj,
+ &rdev->kobj, nm))
+ printk(KERN_WARNING
+ "raid5: failed to create "
+ " link %s for %s\n",
+ nm, mdname(mddev));
} else
break;
}
.spare_active = raid5_spare_active,
.sync_request = sync_request,
.resize = raid5_resize,
+#ifdef CONFIG_MD_RAID5_RESHAPE
+ .check_reshape = raid5_check_reshape,
+ .start_reshape = raid5_start_reshape,
+#endif
.quiesce = raid5_quiesce,
};
tv.tv_usec - ir->base_time.tv_usec;
}
- /* Allow some timmer jitter (RC5 is ~24ms anyway so this is ok) */
+ /* signal we're ready to start a new code */
+ ir->active = 0;
+
+ /* Allow some timer jitter (RC5 is ~24ms anyway so this is ok) */
if (gap < 28000) {
dprintk(1, "ir-common: spurious timer_end\n");
return;
}
- ir->active = 0;
if (ir->last_bit < 20) {
/* ignore spurious codes (caused by light/other remotes) */
dprintk(1, "ir-common: short code: %x\n", ir->code);
struct dvb_device *dvbdev = file->private_data;
struct dmxdev *dmxdev = dvbdev->priv;
- if (mutex_lock_interruptible(&dmxdev->mutex))
- return -ERESTARTSYS;
+ mutex_lock(&dmxdev->mutex);
if ((file->f_flags & O_ACCMODE) == O_WRONLY) {
dmxdev->demux->disconnect_frontend(dmxdev->demux);
static int dvb_dmxdev_filter_free(struct dmxdev *dmxdev,
struct dmxdev_filter *dmxdevfilter)
{
- if (mutex_lock_interruptible(&dmxdev->mutex))
- return -ERESTARTSYS;
-
- if (mutex_lock_interruptible(&dmxdevfilter->mutex)) {
- mutex_unlock(&dmxdev->mutex);
- return -ERESTARTSYS;
- }
+ mutex_lock(&dmxdev->mutex);
+ mutex_lock(&dmxdevfilter->mutex);
dvb_dmxdev_filter_stop(dmxdevfilter);
dvb_dmxdev_filter_reset(dmxdevfilter);
struct dvb_demux *demux = feed->demux;
int ret;
- if (mutex_lock_interruptible(&demux->mutex))
- return -ERESTARTSYS;
+ mutex_lock(&demux->mutex);
if (feed->state < DMX_STATE_GO) {
mutex_unlock(&demux->mutex);
struct dvb_demux *demux = (struct dvb_demux *)dmx;
struct dvb_demux_feed *feed = (struct dvb_demux_feed *)ts_feed;
- if (mutex_lock_interruptible(&demux->mutex))
- return -ERESTARTSYS;
+ mutex_lock(&demux->mutex);
if (feed->state == DMX_STATE_FREE) {
mutex_unlock(&demux->mutex);
struct dvb_demux *dvbdmx = dvbdmxfeed->demux;
int ret;
- if (mutex_lock_interruptible(&dvbdmx->mutex))
- return -ERESTARTSYS;
+ mutex_lock(&dvbdmx->mutex);
if (!dvbdmx->stop_feed) {
mutex_unlock(&dvbdmx->mutex);
struct dvb_demux_feed *dvbdmxfeed = (struct dvb_demux_feed *)feed;
struct dvb_demux *dvbdmx = dvbdmxfeed->demux;
- if (mutex_lock_interruptible(&dvbdmx->mutex))
- return -ERESTARTSYS;
+ mutex_lock(&dvbdmx->mutex);
if (dvbdmxfilter->feed != dvbdmxfeed) {
mutex_unlock(&dvbdmx->mutex);
struct dvb_demux_feed *dvbdmxfeed = (struct dvb_demux_feed *)feed;
struct dvb_demux *dvbdmx = (struct dvb_demux *)demux;
- if (mutex_lock_interruptible(&dvbdmx->mutex))
- return -ERESTARTSYS;
+ mutex_lock(&dvbdmx->mutex);
if (dvbdmxfeed->state == DMX_STATE_FREE) {
mutex_unlock(&dvbdmx->mutex);
if (demux->frontend)
return -EINVAL;
- if (mutex_lock_interruptible(&dvbdemux->mutex))
- return -ERESTARTSYS;
+ mutex_lock(&dvbdemux->mutex);
demux->frontend = frontend;
mutex_unlock(&dvbdemux->mutex);
{
struct dvb_demux *dvbdemux = (struct dvb_demux *)demux;
- if (mutex_lock_interruptible(&dvbdemux->mutex))
- return -ERESTARTSYS;
+ mutex_lock(&dvbdemux->mutex);
demux->frontend = NULL;
mutex_unlock(&dvbdemux->mutex);
int id;
- if (mutex_lock_interruptible(&dvbdev_register_lock))
- return -ERESTARTSYS;
+ mutex_lock(&dvbdev_register_lock);
if ((id = dvbdev_get_free_id (adap, type)) < 0){
mutex_unlock(&dvbdev_register_lock);
{
int num;
- if (mutex_lock_interruptible(&dvbdev_register_lock))
- return -ERESTARTSYS;
+ mutex_lock(&dvbdev_register_lock);
if ((num = dvbdev_get_free_adapter_num ()) < 0) {
mutex_unlock(&dvbdev_register_lock);
int dvb_unregister_adapter(struct dvb_adapter *adap)
{
- if (mutex_lock_interruptible(&dvbdev_register_lock))
- return -ERESTARTSYS;
+ mutex_lock(&dvbdev_register_lock);
list_del (&adap->list_head);
mutex_unlock(&dvbdev_register_lock);
return 0;
return -ENOMEM;
input_dev->evbit[0] = BIT(EV_KEY);
- input_dev->keycodesize = sizeof(unsigned char);
- input_dev->keycodemax = KEY_MAX;
input_dev->name = "IR-receiver inside an USB DVB receiver";
input_dev->phys = d->rc_phys;
usb_to_input_id(d->udev, &input_dev->id);
/* detect if it is present or not */
if (isl6421_set_voltage(fe, SEC_VOLTAGE_OFF)) {
kfree(isl6421);
+ fe->sec_priv = NULL;
return NULL;
}
/* set input */
if (state->config->set_pll_input)
- state->config->set_pll_input(buf, 1);
+ state->config->set_pll_input(buf+1, 1);
break;
case VSB_8:
/* Set non-punctured clock for VSB */
/* set input */
if (state->config->set_pll_input)
- state->config->set_pll_input(buf, 0);
+ state->config->set_pll_input(buf+1, 0);
break;
default:
return -EINVAL;
for(i=0; i< cmd->msg_len; i++) {
tda10086_write_byte(state, 0x48+i, cmd->msg[i]);
}
- tda10086_write_byte(state, 0x36, 0x08 | ((cmd->msg_len + 1) << 4));
+ tda10086_write_byte(state, 0x36, 0x08 | ((cmd->msg_len - 1) << 4));
tda10086_diseqc_wait(state);
writel(val, &pluto->io_mem[reg]);
}
+static void pluto_write_tscr(struct pluto *pluto, u32 val)
+{
+ /* set the number of packets */
+ val &= ~TSCR_ADEF;
+ val |= TS_DMA_PACKETS / 2;
+
+ pluto_writereg(pluto, REG_TSCR, val);
+}
+
static void pluto_setsda(void *data, int state)
{
struct pluto *pluto = data;
if (val & TSCR_RSTN) {
val &= ~TSCR_RSTN;
- pluto_writereg(pluto, REG_TSCR, val);
+ pluto_write_tscr(pluto, val);
}
if (reenable) {
val |= TSCR_RSTN;
- pluto_writereg(pluto, REG_TSCR, val);
+ pluto_write_tscr(pluto, val);
}
}
}
/* ACK the interrupt */
- pluto_writereg(pluto, REG_TSCR, tscr | TSCR_IACK);
+ pluto_write_tscr(pluto, tscr | TSCR_IACK);
return IRQ_HANDLED;
}
{
u32 val = pluto_readreg(pluto, REG_TSCR);
- /* set the number of packets */
- val &= ~TSCR_ADEF;
- val |= TS_DMA_PACKETS / 2;
/* disable AFUL and LOCK interrupts */
val |= (TSCR_MSKA | TSCR_MSKL);
/* enable DMA and OVERFLOW interrupts */
/* clear pending interrupts */
val |= TSCR_IACK;
- pluto_writereg(pluto, REG_TSCR, val);
+ pluto_write_tscr(pluto, val);
}
static void pluto_disable_irqs(struct pluto *pluto)
/* clear pending interrupts */
val |= TSCR_IACK;
- pluto_writereg(pluto, REG_TSCR, val);
+ pluto_write_tscr(pluto, val);
}
static int __devinit pluto_hw_init(struct pluto *pluto)
#
menu "Radio Adapters"
- depends on VIDEO_DEV!=n
+ depends on VIDEO_DEV
config RADIO_CADET
tristate "ADS Cadet AM/FM Tuner"
return 0;
}
-static int msp_suspend(struct device * dev, pm_message_t state)
+static int msp_suspend(struct i2c_client *client, pm_message_t state)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
v4l_dbg(1, msp_debug, client, "suspend\n");
msp_reset(client);
return 0;
}
-static int msp_resume(struct device * dev)
+static int msp_resume(struct i2c_client *client)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
v4l_dbg(1, msp_debug, client, "resume\n");
msp_wake_thread(client);
if (msp_reset(client) == -1) {
v4l_dbg(1, msp_debug, client, "msp3400 not found\n");
kfree(client);
- return -1;
+ return 0;
}
state = kmalloc(sizeof(*state), GFP_KERNEL);
v4l_dbg(1, msp_debug, client, "not an msp3400 (cannot read chip version)\n");
kfree(state);
kfree(client);
- return -1;
+ return 0;
}
msp_set_audio(client);
.id = I2C_DRIVERID_MSP3400,
.attach_adapter = msp_probe,
.detach_client = msp_detach,
+ .suspend = msp_suspend,
+ .resume = msp_resume,
.command = msp_command,
.driver = {
.name = "msp3400",
- .suspend = msp_suspend,
- .resume = msp_resume,
},
};
ret |= pvr2_write_register(hdw, 0xaa18, 0x00840000); /*unknown*/
LOCK_TAKE(hdw->ctl_lock); do {
hdw->cmd_buffer[0] = FX2CMD_FWPOST1;
- ret |= pvr2_send_request(hdw,hdw->cmd_buffer,1,0,0);
+ ret |= pvr2_send_request(hdw,hdw->cmd_buffer,1,NULL,0);
hdw->cmd_buffer[0] = FX2CMD_MEMSEL;
hdw->cmd_buffer[1] = 0;
- ret |= pvr2_send_request(hdw,hdw->cmd_buffer,2,0,0);
+ ret |= pvr2_send_request(hdw,hdw->cmd_buffer,2,NULL,0);
} while (0); LOCK_GIVE(hdw->ctl_lock);
if (ret) {
LOCK_TAKE(hdw->ctl_lock); do {
hdw->cmd_buffer[0] = FX2CMD_MEMSEL;
hdw->cmd_buffer[1] = 0;
- ret |= pvr2_send_request(hdw,hdw->cmd_buffer,2,0,0);
+ ret |= pvr2_send_request(hdw,hdw->cmd_buffer,2,NULL,0);
} while (0); LOCK_GIVE(hdw->ctl_lock);
if (ret) {
{
if (vp->dev_video) {
pvr2_v4l2_dev_destroy(vp->dev_video);
- vp->dev_video = 0;
+ vp->dev_video = NULL;
}
if (vp->dev_radio) {
pvr2_v4l2_dev_destroy(vp->dev_radio);
- vp->dev_radio = 0;
+ vp->dev_radio = NULL;
}
pvr2_trace(PVR2_TRACE_STRUCT,"Destroying pvr2_v4l2 id=%p",vp);
{
int mindevnum;
int unit_number;
- int *nr_ptr = 0;
+ int *nr_ptr = NULL;
dip->v4lp = vp;
reg |= 0x10;
} else if (std == V4L2_STD_NTSC_M_JP) {
reg |= 0x40;
- } else if (std == V4L2_STD_SECAM) {
+ } else if (std & V4L2_STD_SECAM) {
reg |= 0x50;
}
saa711x_write(client, R_0E_CHROMA_CNTL_1, reg);
return 0;
}
-static int tuner_suspend(struct device *dev, pm_message_t state)
+static int tuner_suspend(struct i2c_client *c, pm_message_t state)
{
- struct i2c_client *c = container_of (dev, struct i2c_client, dev);
struct tuner *t = i2c_get_clientdata (c);
tuner_dbg ("suspend\n");
return 0;
}
-static int tuner_resume(struct device *dev)
+static int tuner_resume(struct i2c_client *c)
{
- struct i2c_client *c = container_of (dev, struct i2c_client, dev);
struct tuner *t = i2c_get_clientdata (c);
tuner_dbg ("resume\n");
.attach_adapter = tuner_probe,
.detach_client = tuner_detach,
.command = tuner_command,
+ .suspend = tuner_suspend,
+ .resume = tuner_resume,
.driver = {
.name = "tuner",
- .suspend = tuner_suspend,
- .resume = tuner_resume,
},
};
static struct i2c_client client_template = {
static int
mptsas_ioc_reset(MPT_ADAPTER *ioc, int reset_phase)
{
- MPT_SCSI_HOST *hd = (MPT_SCSI_HOST *)ioc->sh->hostdata;
+ MPT_SCSI_HOST *hd;
struct mptsas_target_reset_event *target_reset_list, *n;
int rc;
if (reset_phase != MPT_IOC_POST_RESET)
goto out;
- if (!hd || !hd->ioc)
+ if (!ioc->sh || !ioc->sh->hostdata)
+ goto out;
+ hd = (MPT_SCSI_HOST *)ioc->sh->hostdata;
+ if (!hd->ioc)
goto out;
if (list_empty(&hd->target_reset_list))
return BLKPREP_KILL;
}
- /* request is already processed by us, so return */
- if (blk_special_request(req)) {
- osm_debug("REQ_SPECIAL already set!\n");
- req->cmd_flags |= REQ_DONTPREP;
- return BLKPREP_OK;
- }
-
/* connect the i2o_block_request to the request */
if (!req->special) {
ireq = i2o_block_request_alloc();
ireq->i2o_blk_dev = i2o_blk_dev;
req->special = ireq;
ireq->req = req;
- } else
- ireq = req->special;
-
+ }
/* do not come back here */
- req->cmd_type = REQ_TYPE_SPECIAL;
req->cmd_flags |= REQ_DONTPREP;
return BLKPREP_OK;
mode &= 3; /* get current power mode */
- if (unit > ARRAY_SIZE(sm->unit_power)) {
+ if (unit >= ARRAY_SIZE(sm->unit_power)) {
dev_err(dev, "%s: bad unit %d\n", __FUNCTION__, unit);
goto already;
}
if(host->dma_dir == DMA_FROM_DEVICE) {
imxmci_busy_wait_for_status(host, &stat,
- STATUS_APPL_BUFF_FF | STATUS_DATA_TRANS_DONE,
+ STATUS_APPL_BUFF_FF | STATUS_DATA_TRANS_DONE |
+ STATUS_TIME_OUT_READ,
50, "imxmci_cpu_driven_data read");
while((stat & (STATUS_APPL_BUFF_FF | STATUS_DATA_TRANS_DONE)) &&
+ !(stat & STATUS_TIME_OUT_READ) &&
(host->data_cnt < 512)) {
udelay(20); /* required for clocks < 8MHz*/
if(host->dma_size & 0x1ff)
stat &= ~STATUS_CRC_READ_ERR;
+ if(stat & STATUS_TIME_OUT_READ) {
+ dev_dbg(mmc_dev(host->mmc), "imxmci_cpu_driven_data read timeout STATUS = 0x%x\n",
+ stat);
+ trans_done = -1;
+ }
+
} else {
imxmci_busy_wait_for_status(host, &stat,
STATUS_APPL_BUFF_FE,
*/
stat |= host->status_reg;
+ if(test_bit(IMXMCI_PEND_CPU_DATA_b, &host->pending_events))
+ stat &= ~STATUS_CRC_READ_ERR;
+
if(test_bit(IMXMCI_PEND_WAIT_RESP_b, &host->pending_events)) {
imxmci_busy_wait_for_status(host, &stat,
STATUS_END_CMD_RESP | STATUS_ERR_MASK,
tristate "Gianfar Ethernet"
depends on 85xx || 83xx || PPC_86xx
select PHYLIB
+ select CRC32
help
This driver supports the Gigabit TSEC on the MPC83xx, MPC85xx,
and MPC86xx family of chips, and the FEC on the 8540.
when the driver is receiving lots of packets from the card.
config CHELSIO_T3
- tristate "Chelsio Communications T3 10Gb Ethernet support"
- depends on PCI
- help
- This driver supports Chelsio T3-based gigabit and 10Gb Ethernet
- adapters.
+ tristate "Chelsio Communications T3 10Gb Ethernet support"
+ depends on PCI
+ select FW_LOADER
+ help
+ This driver supports Chelsio T3-based gigabit and 10Gb Ethernet
+ adapters.
- For general information about Chelsio and our products, visit
- our website at <http://www.chelsio.com>.
+ For general information about Chelsio and our products, visit
+ our website at <http://www.chelsio.com>.
- For customer support, please visit our customer support page at
- <http://www.chelsio.com/support.htm>.
+ For customer support, please visit our customer support page at
+ <http://www.chelsio.com/support.htm>.
- Please send feedback to <linux-bugs@chelsio.com>.
+ Please send feedback to <linux-bugs@chelsio.com>.
- To compile this driver as a module, choose M here: the module
- will be called cxgb3.
+ To compile this driver as a module, choose M here: the module
+ will be called cxgb3.
config EHEA
tristate "eHEA Ethernet support"
int i;
crc32 = ether_crc_le(6, mc_addr);
- crc32 = ~crc32;
for (i = 0; i < 32; i++)
value |= (((crc32 >> i) & 1) << (31 - i));
if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
cso = skb->h.raw - skb->data;
- css = (skb->h.raw + skb->csum) - skb->data;
+ css = (skb->h.raw + skb->csum_offset) - skb->data;
if (unlikely(cso & 0x1)) {
printk(KERN_DEBUG "%s: payload offset != even number\n",
atl1_driver_name);
/* mss will be nonzero if we're doing segment offload (TSO/GSO) */
mss = skb_shinfo(skb)->gso_size;
if (mss) {
- if (skb->protocol == ntohs(ETH_P_IP)) {
+ if (skb->protocol == htons(ETH_P_IP)) {
proto_hdr_len = ((skb->h.raw - skb->data) +
(skb->h.th->doff << 2));
if (unlikely(proto_hdr_len > len)) {
return;
adapter = netdev_priv(netdev);
+
+ /* Some atl1 boards lack persistent storage for their MAC, and get it
+ * from the BIOS during POST. If we've been messing with the MAC
+ * address, we need to restore the permanent one.
+ */
+ if (memcmp(adapter->hw.mac_addr, adapter->hw.perm_mac_addr, ETH_ALEN)) {
+ memcpy(adapter->hw.mac_addr, adapter->hw.perm_mac_addr, ETH_ALEN);
+ atl1_set_mac_addr(&adapter->hw);
+ }
+
iowrite16(0, adapter->hw.hw_addr + REG_GPHY_ENABLE);
unregister_netdev(netdev);
pci_iounmap(pdev, adapter->hw.hw_addr);
bw32(bp, B44_RXCONFIG, val);
} else {
unsigned char zero[6] = {0, 0, 0, 0, 0, 0};
- int i = 0;
+ int i = 1;
__b44_set_mac_addr(bp);
#define DRV_MODULE_NAME "bnx2"
#define PFX DRV_MODULE_NAME ": "
-#define DRV_MODULE_VERSION "1.5.5"
-#define DRV_MODULE_RELDATE "February 1, 2007"
+#define DRV_MODULE_VERSION "1.5.8"
+#define DRV_MODULE_RELDATE "April 24, 2007"
#define RUN_AT(x) (jiffies + (x))
(sblk->status_tx_quick_consumer_index0 != bp->hw_tx_cons))
return 1;
- if (((sblk->status_attn_bits & STATUS_ATTN_BITS_LINK_STATE) != 0) !=
- bp->link_up)
+ if ((sblk->status_attn_bits & STATUS_ATTN_BITS_LINK_STATE) !=
+ (sblk->status_attn_bits_ack & STATUS_ATTN_BITS_LINK_STATE))
return 1;
return 0;
if ((align_start = (offset32 & 3))) {
offset32 &= ~3;
- len32 += (4 - align_start);
+ len32 += align_start;
+ if (len32 < 4)
+ len32 = 4;
if ((rc = bnx2_nvram_read(bp, offset32, start, 4)))
return rc;
}
if (len32 & 3) {
- if ((len32 > 4) || !align_start) {
- align_end = 4 - (len32 & 3);
- len32 += align_end;
- if ((rc = bnx2_nvram_read(bp, offset32 + len32 - 4,
- end, 4))) {
- return rc;
- }
- }
+ align_end = 4 - (len32 & 3);
+ len32 += align_end;
+ if ((rc = bnx2_nvram_read(bp, offset32 + len32 - 4, end, 4)))
+ return rc;
}
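
With the corrected arithmetic, the region simply grows by the unaligned leading bytes and is padded out to a dword at the tail; for instance, a hypothetical 6-byte write at offset 5:

	/*
	 * align_start = 5 & 3 = 1  ->  offset32 = 4, len32 = 6 + 1 = 7
	 * len32 & 3 = 3            ->  align_end = 1, len32 = 8
	 * 'start' holds the dword at offset 4 (preserving byte 4) and 'end' the
	 * dword at offset 8 (preserving byte 11), so the aligned 8-byte write
	 * covers offsets 4-11 without clobbering the neighbouring bytes.
	 */
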
if (align_start || align_end) {
if ((rc = bnx2_enable_nvram_write(bp)) != 0)
goto nvram_write_end;
- /* Erase the page */
- if ((rc = bnx2_nvram_erase_page(bp, page_start)) != 0)
- goto nvram_write_end;
-
- /* Re-enable the write again for the actual write */
- bnx2_enable_nvram_write(bp);
-
/* Loop to write back the buffer data from page_start to
* data_start */
i = 0;
if (bp->flash_info->buffered == 0) {
+ /* Erase the page */
+ if ((rc = bnx2_nvram_erase_page(bp, page_start)) != 0)
+ goto nvram_write_end;
+
+ /* Re-enable the write again for the actual write */
+ bnx2_enable_nvram_write(bp);
+
for (addr = page_start; addr < data_start;
addr += 4, i += 4) {
val = REG_RD(bp, BNX2_MQ_CONFIG);
val &= ~BNX2_MQ_CONFIG_KNL_BYP_BLK_SIZE;
val |= BNX2_MQ_CONFIG_KNL_BYP_BLK_SIZE_256;
+ if (CHIP_ID(bp) == CHIP_ID_5709_A0 || CHIP_ID(bp) == CHIP_ID_5709_A1)
+ val |= BNX2_MQ_CONFIG_HALT_DIS;
+
REG_WR(bp, BNX2_MQ_CONFIG, val);
val = 0x10000 + (MAX_CID_CNT * MB_KERNEL_CTX_SIZE);
#define CHIP_ID_5708_B0 0x57081000
#define CHIP_ID_5708_B1 0x57081010
#define CHIP_ID_5709_A0 0x57090000
+#define CHIP_ID_5709_A1 0x57090010
#define CHIP_BOND_ID(bp) (((bp)->chip_id) & 0xf)
};
enum {
- SUPPORTED_OFFLOAD = 1 << 24,
- SUPPORTED_IRQ = 1 << 25
+ SUPPORTED_IRQ = 1 << 24
};
enum { /* adapter interrupt-maintained statistics */
unsigned long serdes_signal_loss;
unsigned long xaui_pcs_ctc_err;
unsigned long xaui_pcs_align_change;
+
+ unsigned long num_toggled; /* # times toggled TxEn due to stuck TX */
+ unsigned long num_resets; /* # times reset due to stuck TX */
+
};
struct tp_mib_stats {
MC5_MODE_72_BIT = 2
};
+/* MC5 min active region size */
+enum { MC5_MIN_TIDS = 16 };
+
struct vpd_params {
unsigned int cclk;
unsigned int mclk;
unsigned int stats_update_period; /* MAC stats accumulation period */
unsigned int linkpoll_period; /* link poll period in 0.1s */
unsigned int rev; /* chip revision */
+ unsigned int offload;
+};
+
+enum { /* chip revisions */
+ T3_REV_A = 0,
+ T3_REV_B = 2,
+ T3_REV_B2 = 3,
};
struct trace_params {
struct adapter *adapter;
unsigned int offset;
unsigned int nucast; /* # of address filters for unicast MACs */
+ unsigned int tx_tcnt;
+ unsigned int tx_xcnt;
+ u64 tx_mcnt;
+ unsigned int rx_xcnt;
+ u64 rx_mcnt;
+ unsigned int toggle_cnt;
+ unsigned int txen;
struct mac_stats stats;
};
static inline int is_offload(const struct adapter *adap)
{
- return adapter_info(adap)->caps & SUPPORTED_OFFLOAD;
+ return adap->params.offload;
}
static inline unsigned int core_ticks_per_usec(const struct adapter *adap)
int t3_mac_set_num_ucast(struct cmac *mac, int n);
const struct mac_stats *t3_mac_update_stats(struct cmac *mac);
int t3_mac_set_speed_duplex_fc(struct cmac *mac, int speed, int duplex, int fc);
+int t3b2_mac_watchdog_task(struct cmac *mac);
void t3_mc5_prep(struct adapter *adapter, struct mc5 *mc5, int mode);
int t3_mc5_init(struct mc5 *mc5, unsigned int nservers, unsigned int nfilters,
static inline struct t3c_tid_entry *lookup_tid(const struct tid_info *t,
unsigned int tid)
{
- return tid < t->ntids ? &(t->tid_tab[tid]) : NULL;
+ struct t3c_tid_entry *t3c_tid = tid < t->ntids ?
+ &(t->tid_tab[tid]) : NULL;
+
+ return (t3c_tid && t3c_tid->client) ? t3c_tid : NULL;
}
/*
#include <linux/workqueue.h>
#include <linux/proc_fs.h>
#include <linux/rtnetlink.h>
+#include <linux/firmware.h>
#include <asm/uaccess.h>
#include "common.h"
int speed, int duplex, int pause)
{
struct net_device *dev = adapter->port[port_id];
+ struct port_info *pi = netdev_priv(dev);
+ struct cmac *mac = &pi->mac;
/* Skip changes from disabled ports. */
if (!netif_running(dev))
return;
if (link_stat != netif_carrier_ok(dev)) {
- if (link_stat)
+ if (link_stat) {
+ t3_mac_enable(mac, MAC_DIRECTION_RX);
netif_carrier_on(dev);
- else
+ } else {
netif_carrier_off(dev);
+ pi->phy.ops->power_down(&pi->phy, 1);
+ t3_mac_disable(mac, MAC_DIRECTION_RX);
+ t3_link_start(&pi->phy, mac, &pi->link_config);
+ }
+
link_report(dev);
}
}
static int setup_sge_qsets(struct adapter *adap)
{
int i, j, err, irq_idx = 0, qset_idx = 0, dummy_dev_idx = 0;
- unsigned int ntxq = is_offload(adap) ? SGE_TXQ_PER_SET : 1;
+ unsigned int ntxq = SGE_TXQ_PER_SET;
if (adap->params.rev > 0 && !(adap->flags & USING_MSI))
irq_idx = -1;
static ssize_t set_nfilters(struct net_device *dev, unsigned int val)
{
struct adapter *adap = dev->priv;
+ int min_tids = is_offload(adap) ? MC5_MIN_TIDS : 0;
if (adap->flags & FULL_INIT_DONE)
return -EBUSY;
if (val && adap->params.rev == 0)
return -EINVAL;
- if (val > t3_mc5_size(&adap->mc5) - adap->params.mc5.nservers)
+ if (val > t3_mc5_size(&adap->mc5) - adap->params.mc5.nservers -
+ min_tids)
return -EINVAL;
adap->params.mc5.nfilters = val;
return 0;
if (adap->flags & FULL_INIT_DONE)
return -EBUSY;
- if (val > t3_mc5_size(&adap->mc5) - adap->params.mc5.nfilters)
+ if (val > t3_mc5_size(&adap->mc5) - adap->params.mc5.nfilters -
+ MC5_MIN_TIDS)
return -EINVAL;
adap->params.mc5.nservers = val;
return 0;
}
}
+#define FW_FNAME "t3fw-%d.%d.%d.bin"
+
+static int upgrade_fw(struct adapter *adap)
+{
+ int ret;
+ char buf[64];
+ const struct firmware *fw;
+ struct device *dev = &adap->pdev->dev;
+
+ snprintf(buf, sizeof(buf), FW_FNAME, FW_VERSION_MAJOR,
+ FW_VERSION_MINOR, FW_VERSION_MICRO);
+ ret = request_firmware(&fw, buf, dev);
+ if (ret < 0) {
+ dev_err(dev, "could not upgrade firmware: unable to load %s\n",
+ buf);
+ return ret;
+ }
+ ret = t3_load_fw(adap, fw->data, fw->size);
+ release_firmware(fw);
+ return ret;
+}
+
/**
* cxgb_up - enable the adapter
* @adapter: adapter being enabled
if (!(adap->flags & FULL_INIT_DONE)) {
err = t3_check_fw_version(adap);
+ if (err == -EINVAL)
+ err = upgrade_fw(adap);
if (err)
goto out;
if (err)
goto out;
+ t3_write_reg(adap, A_ULPRX_TDDP_PSZ, V_HPZ0(PAGE_SHIFT - 12));
+
err = setup_sge_qsets(adap);
if (err)
goto out;
return err;
set_bit(pi->port_id, &adapter->open_device_map);
- if (!ofld_disable) {
+ if (is_offload(adapter) && !ofld_disable) {
err = offload_open(dev);
if (err)
printk(KERN_WARNING
"VLANinsertions ",
"TxCsumOffload ",
"RxCsumGood ",
- "RxDrops "
+ "RxDrops ",
+
+ "CheckTXEnToggled ",
+ "CheckResets ",
+
};
static int get_stats_count(struct net_device *dev)
*data++ = collect_sge_port_stats(adapter, pi, SGE_PSTAT_TX_CSUM);
*data++ = collect_sge_port_stats(adapter, pi, SGE_PSTAT_RX_CSUM_GOOD);
*data++ = s->rx_cong_drops;
+
+ *data++ = s->num_toggled;
+ *data++ = s->num_resets;
}
static inline void reg_block_dump(struct adapter *ap, void *buf,
static void get_sge_param(struct net_device *dev, struct ethtool_ringparam *e)
{
- struct adapter *adapter = dev->priv;
+ const struct adapter *adapter = dev->priv;
+ const struct port_info *pi = netdev_priv(dev);
+ const struct qset_params *q = &adapter->params.sge.qset[pi->first_qset];
e->rx_max_pending = MAX_RX_BUFFERS;
e->rx_mini_max_pending = 0;
e->rx_jumbo_max_pending = MAX_RX_JUMBO_BUFFERS;
e->tx_max_pending = MAX_TXQ_ENTRIES;
- e->rx_pending = adapter->params.sge.qset[0].fl_size;
- e->rx_mini_pending = adapter->params.sge.qset[0].rspq_size;
- e->rx_jumbo_pending = adapter->params.sge.qset[0].jumbo_size;
- e->tx_pending = adapter->params.sge.qset[0].txq_size[0];
+ e->rx_pending = q->fl_size;
+ e->rx_mini_pending = q->rspq_size;
+ e->rx_jumbo_pending = q->jumbo_size;
+ e->tx_pending = q->txq_size[0];
}
static int set_sge_param(struct net_device *dev, struct ethtool_ringparam *e)
{
int i;
+ struct qset_params *q;
struct adapter *adapter = dev->priv;
+ const struct port_info *pi = netdev_priv(dev);
if (e->rx_pending > MAX_RX_BUFFERS ||
e->rx_jumbo_pending > MAX_RX_JUMBO_BUFFERS ||
if (adapter->flags & FULL_INIT_DONE)
return -EBUSY;
- for (i = 0; i < SGE_QSETS; ++i) {
- struct qset_params *q = &adapter->params.sge.qset[i];
-
+ q = &adapter->params.sge.qset[pi->first_qset];
+ for (i = 0; i < pi->nqsets; ++i, ++q) {
q->rspq_size = e->rx_mini_pending;
q->fl_size = e->rx_pending;
q->jumbo_size = e->rx_jumbo_pending;
}
}
+static void check_t3b2_mac(struct adapter *adapter)
+{
+ int i;
+
+ if (!rtnl_trylock()) /* synchronize with ifdown */
+ return;
+
+ for_each_port(adapter, i) {
+ struct net_device *dev = adapter->port[i];
+ struct port_info *p = netdev_priv(dev);
+ int status;
+
+ if (!netif_running(dev))
+ continue;
+
+ status = 0;
+ if (netif_running(dev) && netif_carrier_ok(dev))
+ status = t3b2_mac_watchdog_task(&p->mac);
+ if (status == 1)
+ p->mac.stats.num_toggled++;
+ else if (status == 2) {
+ struct cmac *mac = &p->mac;
+
+ t3_mac_set_mtu(mac, dev->mtu);
+ t3_mac_set_address(mac, 0, dev->dev_addr);
+ cxgb_set_rxmode(dev);
+ t3_link_start(&p->phy, mac, &p->link_config);
+ t3_mac_enable(mac, MAC_DIRECTION_RX | MAC_DIRECTION_TX);
+ t3_port_intr_enable(adapter, p->port_id);
+ p->mac.stats.num_resets++;
+ }
+ }
+ rtnl_unlock();
+}
+
+
static void t3_adap_check_task(struct work_struct *work)
{
struct adapter *adapter = container_of(work, struct adapter,
adapter->check_task_cnt = 0;
}
+ if (p->rev == T3_REV_B2)
+ check_t3b2_mac(adapter);
+
/* Schedule the next check update if any port is active. */
spin_lock(&adapter->work_lock);
if (adapter->open_device_map & PORT_MASK)
if (!test_bit(i, &adap->registered_device_map))
continue;
- printk(KERN_INFO "%s: %s %s RNIC (rev %d) %s%s\n",
+ printk(KERN_INFO "%s: %s %s %sNIC (rev %d) %s%s\n",
dev->name, ai->desc, pi->port_type->desc,
- adap->params.rev, buf,
+ is_offload(adap) ? "R" : "", adap->params.rev, buf,
(adap->flags & USING_MSIX) ? " MSI-X" :
(adap->flags & USING_MSI) ? " MSI" : "");
if (adap->name == dev->name && adap->params.vpd.mclk)
spin_lock_bh(&td->tid_release_lock);
p->ctx = (void *)td->tid_release_list;
+ p->client = NULL;
td->tid_release_list = p;
if (!p->ctx)
schedule_work(&td->tid_release_task);
struct tid_info *t = &(T3C_DATA(tdev))->tid_maps;
spin_lock_bh(&t->atid_lock);
- if (t->afree) {
+ if (t->afree &&
+ t->atids_in_use + atomic_read(&t->tids_in_use) + MC5_MIN_TIDS <=
+ t->ntids) {
union active_open_entry *p = t->afree;
atid = (p - t->atid_tab) + t->atid_base;
struct t3c_tid_entry *t3c_tid;
t3c_tid = lookup_atid(&(T3C_DATA(dev))->tid_maps, atid);
- if (t3c_tid->ctx && t3c_tid->client && t3c_tid->client->handlers &&
+ if (t3c_tid && t3c_tid->ctx && t3c_tid->client &&
+ t3c_tid->client->handlers &&
t3c_tid->client->handlers[CPL_ACT_OPEN_RPL]) {
return t3c_tid->client->handlers[CPL_ACT_OPEN_RPL] (dev, skb,
t3c_tid->
struct t3c_tid_entry *t3c_tid;
t3c_tid = lookup_stid(&(T3C_DATA(dev))->tid_maps, stid);
- if (t3c_tid->ctx && t3c_tid->client->handlers &&
+ if (t3c_tid && t3c_tid->ctx && t3c_tid->client->handlers &&
t3c_tid->client->handlers[p->opcode]) {
return t3c_tid->client->handlers[p->opcode] (dev, skb,
t3c_tid->ctx);
struct t3c_tid_entry *t3c_tid;
t3c_tid = lookup_tid(&(T3C_DATA(dev))->tid_maps, hwtid);
- if (t3c_tid->ctx && t3c_tid->client->handlers &&
+ if (t3c_tid && t3c_tid->ctx && t3c_tid->client->handlers &&
t3c_tid->client->handlers[p->opcode]) {
return t3c_tid->client->handlers[p->opcode]
(dev, skb, t3c_tid->ctx);
}
}
+/*
+ * Returns an sk_buff for a reply CPL message of size len. If the input
+ * sk_buff has no other users it is trimmed and reused, otherwise a new buffer
+ * is allocated. The input skb must be of size at least len. Note that this
+ * operation does not destroy the original skb data even if it decides to reuse
+ * the buffer.
+ */
+static struct sk_buff *cxgb3_get_cpl_reply_skb(struct sk_buff *skb, size_t len,
+ int gfp)
+{
+ if (likely(!skb_cloned(skb))) {
+ BUG_ON(skb->len < len);
+ __skb_trim(skb, len);
+ skb_get(skb);
+ } else {
+ skb = alloc_skb(len, gfp);
+ if (skb)
+ __skb_put(skb, len);
+ }
+ return skb;
+}
+
static int do_abort_req_rss(struct t3cdev *dev, struct sk_buff *skb)
{
union opcode_tid *p = cplhdr(skb);
struct t3c_tid_entry *t3c_tid;
t3c_tid = lookup_tid(&(T3C_DATA(dev))->tid_maps, hwtid);
- if (t3c_tid->ctx && t3c_tid->client->handlers &&
+ if (t3c_tid && t3c_tid->ctx && t3c_tid->client->handlers &&
t3c_tid->client->handlers[p->opcode]) {
return t3c_tid->client->handlers[p->opcode]
(dev, skb, t3c_tid->ctx);
} else {
struct cpl_abort_req_rss *req = cplhdr(skb);
struct cpl_abort_rpl *rpl;
+ struct sk_buff *reply_skb;
+ unsigned int tid = GET_TID(req);
+ u8 cmd = req->status;
+
+ if (req->status == CPL_ERR_RTX_NEG_ADVICE ||
+ req->status == CPL_ERR_PERSIST_NEG_ADVICE)
+ goto out;
- struct sk_buff *skb =
- alloc_skb(sizeof(struct cpl_abort_rpl), GFP_ATOMIC);
- if (!skb) {
+ reply_skb = cxgb3_get_cpl_reply_skb(skb,
+ sizeof(struct
+ cpl_abort_rpl),
+ GFP_ATOMIC);
+
+ if (!reply_skb) {
printk("do_abort_req_rss: couldn't get skb!\n");
goto out;
}
- skb->priority = CPL_PRIORITY_DATA;
- __skb_put(skb, sizeof(struct cpl_abort_rpl));
- rpl = cplhdr(skb);
+ reply_skb->priority = CPL_PRIORITY_DATA;
+ __skb_put(reply_skb, sizeof(struct cpl_abort_rpl));
+ rpl = cplhdr(reply_skb);
rpl->wr.wr_hi =
htonl(V_WR_OP(FW_WROPCODE_OFLD_HOST_ABORT_CON_RPL));
- rpl->wr.wr_lo = htonl(V_WR_TID(GET_TID(req)));
- OPCODE_TID(rpl) =
- htonl(MK_OPCODE_TID(CPL_ABORT_RPL, GET_TID(req)));
- rpl->cmd = req->status;
- cxgb3_ofld_send(dev, skb);
+ rpl->wr.wr_lo = htonl(V_WR_TID(tid));
+ OPCODE_TID(rpl) = htonl(MK_OPCODE_TID(CPL_ABORT_RPL, tid));
+ rpl->cmd = cmd;
+ cxgb3_ofld_send(dev, reply_skb);
out:
return CPL_RET_BUF_DONE;
}
struct t3c_tid_entry *t3c_tid;
t3c_tid = lookup_atid(&(T3C_DATA(dev))->tid_maps, atid);
- if (t3c_tid->ctx && t3c_tid->client->handlers &&
+ if (t3c_tid && t3c_tid->ctx && t3c_tid->client->handlers &&
t3c_tid->client->handlers[CPL_ACT_ESTABLISH]) {
return t3c_tid->client->handlers[CPL_ACT_ESTABLISH]
(dev, skb, t3c_tid->ctx);
}
}
-static int do_set_tcb_rpl(struct t3cdev *dev, struct sk_buff *skb)
-{
- struct cpl_set_tcb_rpl *rpl = cplhdr(skb);
-
- if (rpl->status != CPL_ERR_NONE)
- printk(KERN_ERR
- "Unexpected SET_TCB_RPL status %u for tid %u\n",
- rpl->status, GET_TID(rpl));
- return CPL_RET_BUF_DONE;
-}
-
static int do_trace(struct t3cdev *dev, struct sk_buff *skb)
{
struct cpl_trace_pkt *p = cplhdr(skb);
struct t3c_tid_entry *t3c_tid;
t3c_tid = lookup_tid(&(T3C_DATA(dev))->tid_maps, hwtid);
- if (t3c_tid->ctx && t3c_tid->client->handlers &&
+ if (t3c_tid && t3c_tid->ctx && t3c_tid->client->handlers &&
t3c_tid->client->handlers[opcode]) {
return t3c_tid->client->handlers[opcode] (dev, skb,
t3c_tid->ctx);
for (tid = 0; tid < ti->ntids; tid++) {
te = lookup_tid(ti, tid);
BUG_ON(!te);
- if (te->ctx && te->client && te->client->redirect) {
+ if (te && te->ctx && te->client && te->client->redirect) {
update_tcb = te->client->redirect(te->ctx, old, new, e);
if (update_tcb) {
l2t_hold(L2DATA(tdev), e);
t3_register_cpl_handler(CPL_CLOSE_CON_RPL, do_hwtid_rpl);
t3_register_cpl_handler(CPL_ABORT_REQ_RSS, do_abort_req_rss);
t3_register_cpl_handler(CPL_ACT_ESTABLISH, do_act_establish);
- t3_register_cpl_handler(CPL_SET_TCB_RPL, do_set_tcb_rpl);
+ t3_register_cpl_handler(CPL_SET_TCB_RPL, do_hwtid_rpl);
+ t3_register_cpl_handler(CPL_GET_TCB_RPL, do_hwtid_rpl);
t3_register_cpl_handler(CPL_RDMA_TERMINATE, do_term);
t3_register_cpl_handler(CPL_RDMA_EC_STATUS, do_hwtid_rpl);
t3_register_cpl_handler(CPL_TRACE_PKT, do_trace);
unsigned int tcam_size = mc5->tcam_size;
struct adapter *adap = mc5->adapter;
+ if (!tcam_size)
+ return 0;
+
if (nroutes > MAX_ROUTES || nroutes + nservers + nfilters > tcam_size)
return -EINVAL;
#define A_TP_RX_TRC_KEY0 0x120
+#define A_TP_TX_DROP_CNT_CH0 0x12d
+
+#define S_TXDROPCNTCH0RCVD 0
+#define M_TXDROPCNTCH0RCVD 0xffff
+#define V_TXDROPCNTCH0RCVD(x) ((x) << S_TXDROPCNTCH0RCVD)
+#define G_TXDROPCNTCH0RCVD(x) (((x) >> S_TXDROPCNTCH0RCVD) & \
+ M_TXDROPCNTCH0RCVD)
+
#define A_ULPRX_CTL 0x500
#define S_ROUND_ROBIN 4
#define A_ULPRX_ISCSI_TAGMASK 0x514
+#define S_HPZ0 0
+#define M_HPZ0 0xf
+#define V_HPZ0(x) ((x) << S_HPZ0)
+#define G_HPZ0(x) (((x) >> S_HPZ0) & M_HPZ0)
+
#define A_ULPRX_TDDP_LLIMIT 0x51c
#define A_ULPRX_TDDP_ULIMIT 0x520
+#define A_ULPRX_TDDP_PSZ 0x528
#define A_ULPRX_STAG_LLIMIT 0x52c
#define V_TXPAUSEEN(x) ((x) << S_TXPAUSEEN)
#define F_TXPAUSEEN V_TXPAUSEEN(1U)
+#define A_XGM_TX_PAUSE_QUANTA 0x808
+
#define A_XGM_RX_CTRL 0x80c
#define S_RXEN 0
#define A_XGM_TXFIFO_CFG 0x888
+#define S_TXIPG 13
+#define M_TXIPG 0xff
+#define V_TXIPG(x) ((x) << S_TXIPG)
+#define G_TXIPG(x) (((x) >> S_TXIPG) & M_TXIPG)
+
#define S_TXFIFOTHRESH 4
#define M_TXFIFOTHRESH 0x1ff
#define V_TXFIFOTHRESH(x) ((x) << S_TXFIFOTHRESH)
+#define S_ENDROPPKT 21
+#define V_ENDROPPKT(x) ((x) << S_ENDROPPKT)
+#define F_ENDROPPKT V_ENDROPPKT(1U)
+
#define A_XGM_SERDES_CTRL 0x890
#define A_XGM_SERDES_CTRL0 0x8e0
#define A_XGM_RX_MAX_PKT_SIZE_ERR_CNT 0x9a4
+#define A_XGM_TX_SPI4_SOP_EOP_CNT 0x9a8
+
+#define S_TXSPI4SOPCNT 16
+#define M_TXSPI4SOPCNT 0xffff
+#define V_TXSPI4SOPCNT(x) ((x) << S_TXSPI4SOPCNT)
+#define G_TXSPI4SOPCNT(x) (((x) >> S_TXSPI4SOPCNT) & M_TXSPI4SOPCNT)
+
#define A_XGM_RX_SPI4_SOP_EOP_CNT 0x9ac
#define XGMAC0_1_BASE_ADDR 0xa00
q->txq[TXQ_ETH].stop_thres = nports *
flits_to_desc(sgl_len(MAX_SKB_FRAGS + 1) + 3);
- if (ntxq == 1) {
+ if (!is_offload(adapter)) {
#ifdef USE_RX_PAGE
q->fl[0].buf_size = RX_PAGE_SIZE;
#else
{2, 0, 0, 0,
F_GPIO2_OEN | F_GPIO4_OEN |
F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, F_GPIO3 | F_GPIO5,
- SUPPORTED_OFFLOAD,
+ 0,
&mi1_mdio_ops, "Chelsio PE9000"},
{2, 0, 0, 0,
F_GPIO2_OEN | F_GPIO4_OEN |
F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, F_GPIO3 | F_GPIO5,
- SUPPORTED_OFFLOAD,
+ 0,
&mi1_mdio_ops, "Chelsio T302"},
{1, 0, 0, 0,
F_GPIO1_OEN | F_GPIO6_OEN | F_GPIO7_OEN | F_GPIO10_OEN |
F_GPIO1_OUT_VAL | F_GPIO6_OUT_VAL | F_GPIO10_OUT_VAL, 0,
- SUPPORTED_10000baseT_Full | SUPPORTED_AUI | SUPPORTED_OFFLOAD,
+ SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
&mi1_mdio_ext_ops, "Chelsio T310"},
{2, 0, 0, 0,
F_GPIO1_OEN | F_GPIO2_OEN | F_GPIO4_OEN | F_GPIO5_OEN | F_GPIO6_OEN |
F_GPIO7_OEN | F_GPIO10_OEN | F_GPIO11_OEN | F_GPIO1_OUT_VAL |
F_GPIO5_OUT_VAL | F_GPIO6_OUT_VAL | F_GPIO10_OUT_VAL, 0,
- SUPPORTED_10000baseT_Full | SUPPORTED_AUI | SUPPORTED_OFFLOAD,
+ SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
&mi1_mdio_ext_ops, "Chelsio T320"},
};
SF_ERASE_SECTOR = 0xd8, /* erase sector */
FW_FLASH_BOOT_ADDR = 0x70000, /* start address of FW in flash */
- FW_VERS_ADDR = 0x77ffc /* flash address holding FW version */
+ FW_VERS_ADDR = 0x77ffc, /* flash address holding FW version */
+ FW_MIN_SIZE = 8 /* at least version and csum */
};
/**
const u32 *p = (const u32 *)fw_data;
int ret, addr, fw_sector = FW_FLASH_BOOT_ADDR >> 16;
- if (size & 3)
+ if ((size & 3) || size < FW_MIN_SIZE)
return -EINVAL;
if (size > FW_VERS_ADDR + 8 - FW_FLASH_BOOT_ADDR)
return -EFBIG;
*/
int t3_phy_intr_handler(struct adapter *adapter)
{
- static const int intr_gpio_bits[] = { 8, 0x20 };
-
+ u32 mask, gpi = adapter_info(adapter)->gpio_intr;
u32 i, cause = t3_read_reg(adapter, A_T3DBG_INT_CAUSE);
for_each_port(adapter, i) {
- if (cause & intr_gpio_bits[i]) {
- struct cphy *phy = &adap2pinfo(adapter, i)->phy;
- int phy_cause = phy->ops->intr_handler(phy);
+ struct port_info *p = adap2pinfo(adapter, i);
+
+ mask = gpi - (gpi & (gpi - 1));
+ gpi -= mask;
+
+ if (!(p->port_type->caps & SUPPORTED_IRQ))
+ continue;
+
+ if (cause & mask) {
+ int phy_cause = p->phy.ops->intr_handler(&p->phy);
if (phy_cause & cphy_cause_link_change)
t3_link_changed(adapter, i);
if (phy_cause & cphy_cause_fifo_error)
- phy->fifo_errors++;
+ p->phy.fifo_errors++;
}
}
struct adapter *adapter = mc7->adapter;
const struct mc7_timing_params *p = &mc7_timings[mem_type];
+ if (!mc7->size)
+ return 0;
+
val = t3_read_reg(adapter, mc7->offset + A_MC7_CFG);
slow = val & F_SLOW;
width = G_WIDTH(val);
do { /* wait for uP to initialize */
msleep(20);
} while (t3_read_reg(adapter, A_CIM_HOST_ACC_DATA) && --attempts);
- if (!attempts)
+ if (!attempts) {
+ CH_ERR(adapter, "uP initialization timed out\n");
goto out_err;
+ }
err = 0;
out_err:
mc7->name = name;
mc7->offset = base_addr - MC7_PMRX_BASE_ADDR;
cfg = t3_read_reg(adapter, mc7->offset + A_MC7_CFG);
- mc7->size = mc7_calc_size(cfg);
+ mc7->size = G_DEN(cfg) == M_DEN ? 0 : mc7_calc_size(cfg);
mc7->width = G_WIDTH(cfg);
}
V_I2C_CLKDIV(adapter->params.vpd.cclk / 80 - 1));
t3_write_reg(adapter, A_T3DBG_GPIO_EN,
ai->gpio_out | F_GPIO0_OEN | F_GPIO0_OUT_VAL);
+ t3_write_reg(adapter, A_MC5_DB_SERVER_INDEX, 0);
if (adapter->params.rev == 0 || !uses_xaui(adapter))
val |= F_ENRGMII;
}
/*
- * Reset the adapter. PCIe cards lose their config space during reset, PCI-X
+ * Reset the adapter.
+ * Older PCIe cards lose their config space during reset, PCI-X
* ones don't.
*/
int t3_reset_adapter(struct adapter *adapter)
{
- int i;
+ int i, save_and_restore_pcie =
+ adapter->params.rev < T3_REV_B2 && is_pcie(adapter);
uint16_t devid = 0;
- if (is_pcie(adapter))
+ if (save_and_restore_pcie)
pci_save_state(adapter->pdev);
t3_write_reg(adapter, A_PL_RST, F_CRSTWRM | F_CRSTWRMMODE);
if (devid != 0x1425)
return -1;
- if (is_pcie(adapter))
+ if (save_and_restore_pcie)
pci_restore_state(adapter->pdev);
return 0;
}
p->tx_num_pgs = pm_num_pages(p->chan_tx_size, p->tx_pg_size);
p->ntimer_qs = p->cm_size >= (128 << 20) ||
adapter->params.rev > 0 ? 12 : 6;
+ }
+
+ adapter->params.offload = t3_mc7_size(&adapter->pmrx) &&
+ t3_mc7_size(&adapter->pmtx) &&
+ t3_mc7_size(&adapter->cm);
+ if (is_offload(adapter)) {
adapter->params.mc5.nservers = DEFAULT_NSERVERS;
adapter->params.mc5.nfilters = adapter->params.rev > 0 ?
DEFAULT_NFILTERS : 0;
#define DRV_NAME "cxgb3"
/* Driver version */
#define DRV_VERSION "1.0-ko"
+
+/* Firmware version */
#define FW_VERSION_MAJOR 3
-#define FW_VERSION_MINOR 2
+#define FW_VERSION_MINOR 3
+#define FW_VERSION_MICRO 0
#endif /* __CHELSIO_VERSION_H */
xaui_serdes_reset(mac);
}
- if (adap->params.rev > 0)
- t3_write_reg(adap, A_XGM_PAUSE_TIMER + oft, 0xf000);
-
val = F_MAC_RESET_;
if (is_10G(adap))
val |= F_PCS_RESET_;
return 0;
}
+int t3b2_mac_reset(struct cmac *mac)
+{
+ struct adapter *adap = mac->adapter;
+ unsigned int oft = mac->offset;
+ u32 val;
+
+ if (!macidx(mac))
+ t3_set_reg_field(adap, A_MPS_CFG, F_PORT0ACTIVE, 0);
+ else
+ t3_set_reg_field(adap, A_MPS_CFG, F_PORT1ACTIVE, 0);
+
+ t3_write_reg(adap, A_XGM_RESET_CTRL + oft, F_MAC_RESET_);
+ t3_read_reg(adap, A_XGM_RESET_CTRL + oft); /* flush */
+
+ msleep(10);
+
+ /* Check for xgm Rx fifo empty */
+ if (t3_wait_op_done(adap, A_XGM_RX_MAX_PKT_SIZE_ERR_CNT + oft,
+ 0x80000000, 1, 5, 2)) {
+ CH_ERR(adap, "MAC %d Rx fifo drain failed\n",
+ macidx(mac));
+ return -1;
+ }
+
+ t3_write_reg(adap, A_XGM_RESET_CTRL + oft, 0);
+ t3_read_reg(adap, A_XGM_RESET_CTRL + oft); /* flush */
+
+ val = F_MAC_RESET_;
+ if (is_10G(adap))
+ val |= F_PCS_RESET_;
+ else if (uses_xaui(adap))
+ val |= F_PCS_RESET_ | F_XG2G_RESET_;
+ else
+ val |= F_RGMII_RESET_ | F_XG2G_RESET_;
+ t3_write_reg(adap, A_XGM_RESET_CTRL + oft, val);
+ t3_read_reg(adap, A_XGM_RESET_CTRL + oft); /* flush */
+ if ((val & F_PCS_RESET_) && adap->params.rev) {
+ msleep(1);
+ t3b_pcs_reset(mac);
+ }
+ t3_write_reg(adap, A_XGM_RX_CFG + oft,
+ F_DISPAUSEFRAMES | F_EN1536BFRAMES |
+ F_RMFCS | F_ENJUMBO | F_ENHASHMCAST);
+
+ if (!macidx(mac))
+ t3_set_reg_field(adap, A_MPS_CFG, 0, F_PORT0ACTIVE);
+ else
+ t3_set_reg_field(adap, A_MPS_CFG, 0, F_PORT1ACTIVE);
+
+ return 0;
+}
+
/*
* Set the exact match register 'idx' to recognize the given Ethernet address.
*/
* Adjust the PAUSE frame watermarks. We always set the LWM, and the
* HWM only if flow-control is enabled.
*/
- hwm = max(MAC_RXFIFO_SIZE - 3 * mtu, MAC_RXFIFO_SIZE / 2U);
- hwm = min(hwm, 3 * MAC_RXFIFO_SIZE / 4 + 1024);
- lwm = hwm - 1024;
+ hwm = max_t(unsigned int, MAC_RXFIFO_SIZE - 3 * mtu,
+ MAC_RXFIFO_SIZE * 38 / 100);
+ hwm = min(hwm, MAC_RXFIFO_SIZE - 8192);
+ lwm = min(3 * (int)mtu, MAC_RXFIFO_SIZE / 4);
+
v = t3_read_reg(adap, A_XGM_RXFIFO_CFG + mac->offset);
v &= ~V_RXFIFOPAUSELWM(M_RXFIFOPAUSELWM);
v |= V_RXFIFOPAUSELWM(lwm / 8);
thres = mtu > thres ? (mtu - thres + 7) / 8 : 0;
thres = max(thres, 8U); /* need at least 8 */
t3_set_reg_field(adap, A_XGM_TXFIFO_CFG + mac->offset,
- V_TXFIFOTHRESH(M_TXFIFOTHRESH), V_TXFIFOTHRESH(thres));
+ V_TXFIFOTHRESH(M_TXFIFOTHRESH) | V_TXIPG(M_TXIPG),
+ V_TXFIFOTHRESH(thres) | V_TXIPG(1));
+
+ if (adap->params.rev > 0)
+ t3_write_reg(adap, A_XGM_PAUSE_TIMER + mac->offset,
+ (hwm - lwm) * 4 / 8);
+ t3_write_reg(adap, A_XGM_TX_PAUSE_QUANTA + mac->offset,
+ MAC_RXFIFO_SIZE * 4 * 8 / 512);
+
return 0;
}
V_PORTSPEED(M_PORTSPEED), val);
}
- val = t3_read_reg(adap, A_XGM_RXFIFO_CFG + oft);
- val &= ~V_RXFIFOPAUSEHWM(M_RXFIFOPAUSEHWM);
- if (fc & PAUSE_TX)
- val |= V_RXFIFOPAUSEHWM(G_RXFIFOPAUSELWM(val) + 128); /* +1KB */
- t3_write_reg(adap, A_XGM_RXFIFO_CFG + oft, val);
-
t3_set_reg_field(adap, A_XGM_TX_CFG + oft, F_TXPAUSEEN,
(fc & PAUSE_RX) ? F_TXPAUSEEN : 0);
return 0;
int idx = macidx(mac);
struct adapter *adap = mac->adapter;
unsigned int oft = mac->offset;
-
+ struct mac_stats *s = &mac->stats;
+
if (which & MAC_DIRECTION_TX) {
t3_write_reg(adap, A_XGM_TX_CTRL + oft, F_TXEN);
t3_write_reg(adap, A_TP_PIO_ADDR, A_TP_TX_DROP_CFG_CH0 + idx);
- t3_write_reg(adap, A_TP_PIO_DATA, 0xbf000001);
+ t3_write_reg(adap, A_TP_PIO_DATA, 0xc0ede401);
t3_write_reg(adap, A_TP_PIO_ADDR, A_TP_TX_DROP_MODE);
t3_set_reg_field(adap, A_TP_PIO_DATA, 1 << idx, 1 << idx);
+
+ t3_write_reg(adap, A_TP_PIO_ADDR, A_TP_TX_DROP_CNT_CH0 + idx);
+ mac->tx_mcnt = s->tx_frames;
+ mac->tx_tcnt = (G_TXDROPCNTCH0RCVD(t3_read_reg(adap,
+ A_TP_PIO_DATA)));
+ mac->tx_xcnt = (G_TXSPI4SOPCNT(t3_read_reg(adap,
+ A_XGM_TX_SPI4_SOP_EOP_CNT +
+ oft)));
+ mac->rx_mcnt = s->rx_frames;
+ mac->rx_xcnt = (G_TXSPI4SOPCNT(t3_read_reg(adap,
+ A_XGM_RX_SPI4_SOP_EOP_CNT +
+ oft)));
+ mac->txen = F_TXEN;
+ mac->toggle_cnt = 0;
}
if (which & MAC_DIRECTION_RX)
t3_write_reg(adap, A_XGM_RX_CTRL + oft, F_RXEN);
{
int idx = macidx(mac);
struct adapter *adap = mac->adapter;
+ int val;
if (which & MAC_DIRECTION_TX) {
t3_write_reg(adap, A_XGM_TX_CTRL + mac->offset, 0);
t3_write_reg(adap, A_TP_PIO_ADDR, A_TP_TX_DROP_CFG_CH0 + idx);
t3_write_reg(adap, A_TP_PIO_DATA, 0xc000001f);
t3_write_reg(adap, A_TP_PIO_ADDR, A_TP_TX_DROP_MODE);
- t3_set_reg_field(adap, A_TP_PIO_DATA, 1 << idx, 0);
+ t3_set_reg_field(adap, A_TP_PIO_DATA, 1 << idx, 1 << idx);
+ mac->txen = 0;
}
- if (which & MAC_DIRECTION_RX)
+ if (which & MAC_DIRECTION_RX) {
+ t3_set_reg_field(mac->adapter, A_XGM_RESET_CTRL + mac->offset,
+ F_PCS_RESET_, 0);
+ msleep(100);
t3_write_reg(adap, A_XGM_RX_CTRL + mac->offset, 0);
+ val = F_MAC_RESET_;
+ if (is_10G(adap))
+ val |= F_PCS_RESET_;
+ else if (uses_xaui(adap))
+ val |= F_PCS_RESET_ | F_XG2G_RESET_;
+ else
+ val |= F_RGMII_RESET_ | F_XG2G_RESET_;
+ t3_write_reg(mac->adapter, A_XGM_RESET_CTRL + mac->offset, val);
+ }
return 0;
}
+int t3b2_mac_watchdog_task(struct cmac *mac)
+{
+ struct adapter *adap = mac->adapter;
+ struct mac_stats *s = &mac->stats;
+ unsigned int tx_tcnt, tx_xcnt;
+ unsigned int tx_mcnt = s->tx_frames;
+ unsigned int rx_mcnt = s->rx_frames;
+ unsigned int rx_xcnt;
+ int status;
+
+ if (tx_mcnt == mac->tx_mcnt) {
+ tx_xcnt = (G_TXSPI4SOPCNT(t3_read_reg(adap,
+ A_XGM_TX_SPI4_SOP_EOP_CNT +
+ mac->offset)));
+ if (tx_xcnt == 0) {
+ t3_write_reg(adap, A_TP_PIO_ADDR,
+ A_TP_TX_DROP_CNT_CH0 + macidx(mac));
+ tx_tcnt = (G_TXDROPCNTCH0RCVD(t3_read_reg(adap,
+ A_TP_PIO_DATA)));
+ } else {
+ mac->toggle_cnt = 0;
+ return 0;
+ }
+ } else {
+ mac->toggle_cnt = 0;
+ return 0;
+ }
+
+ if (((tx_tcnt != mac->tx_tcnt) &&
+ (tx_xcnt == 0) && (mac->tx_xcnt == 0)) ||
+ ((mac->tx_mcnt == tx_mcnt) &&
+ (tx_xcnt != 0) && (mac->tx_xcnt != 0))) {
+ if (mac->toggle_cnt > 4)
+ status = 2;
+ else
+ status = 1;
+ } else {
+ mac->toggle_cnt = 0;
+ return 0;
+ }
+
+ if (rx_mcnt != mac->rx_mcnt)
+ rx_xcnt = (G_TXSPI4SOPCNT(t3_read_reg(adap,
+ A_XGM_RX_SPI4_SOP_EOP_CNT +
+ mac->offset)));
+ else
+ return 0;
+
+ if (mac->rx_mcnt != s->rx_frames && rx_xcnt == 0 && mac->rx_xcnt == 0)
+ status = 2;
+
+ mac->tx_tcnt = tx_tcnt;
+ mac->tx_xcnt = tx_xcnt;
+ mac->tx_mcnt = s->tx_frames;
+ mac->rx_xcnt = rx_xcnt;
+ mac->rx_mcnt = s->rx_frames;
+ if (status == 1) {
+ t3_write_reg(adap, A_XGM_TX_CTRL + mac->offset, 0);
+ t3_read_reg(adap, A_XGM_TX_CTRL + mac->offset); /* flush */
+ t3_write_reg(adap, A_XGM_TX_CTRL + mac->offset, mac->txen);
+ t3_read_reg(adap, A_XGM_TX_CTRL + mac->offset); /* flush */
+ mac->toggle_cnt++;
+ } else if (status == 2) {
+ t3b2_mac_reset(mac);
+ mac->toggle_cnt = 0;
+ }
+ return status;
+}
+
/*
* This function is called periodically to accumulate the current values of the
* RMON counters into the port statistics. Since the packet counters are only
RMON_UPDATE(mac, rx_symbol_errs, RX_SYM_CODE_ERR_FRAMES);
RMON_UPDATE(mac, rx_too_long, RX_OVERSIZE_FRAMES);
- mac->stats.rx_too_long += RMON_READ(mac, A_XGM_RX_MAX_PKT_SIZE_ERR_CNT);
+
+ v = RMON_READ(mac, A_XGM_RX_MAX_PKT_SIZE_ERR_CNT);
+ if (mac->adapter->params.rev == T3_REV_B2)
+ v &= 0x7fffffff;
+ mac->stats.rx_too_long += v;
RMON_UPDATE(mac, rx_frames_64, RX_64B_FRAMES);
RMON_UPDATE(mac, rx_frames_65_127, RX_65_127B_FRAMES);
depca_io_ports[i].device = pldev;
if (platform_device_add(pldev)) {
- platform_device_put(pldev);
depca_io_ports[i].device = NULL;
+ pldev->dev.platform_data = NULL;
+ platform_device_put(pldev);
continue;
}
for (i = 0; i < E1000_MAX_INTR; i++)
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring) &
- e1000_clean_tx_irq(adapter, adapter->tx_ring)))
+ !e1000_clean_tx_irq(adapter, adapter->tx_ring)))
break;
if (likely(adapter->itr_setting & 3))
for (i = 0; i < E1000_MAX_INTR; i++)
if (unlikely(!adapter->clean_rx(adapter, adapter->rx_ring) &
- e1000_clean_tx_irq(adapter, adapter->tx_ring)))
+ !e1000_clean_tx_irq(adapter, adapter->tx_ring)))
break;
if (likely(adapter->itr_setting & 3))
poll_dev->quota -= work_done;
/* If no Tx and not enough Rx work done, exit the polling mode */
- if ((tx_cleaned && (work_done < work_to_do)) ||
+ if ((!tx_cleaned && (work_done == 0)) ||
!netif_running(poll_dev)) {
quit_polling:
if (likely(adapter->itr_setting & 3))
#ifdef CONFIG_E1000_NAPI
unsigned int count = 0;
#endif
- boolean_t cleaned = TRUE;
+ boolean_t cleaned = FALSE;
unsigned int total_tx_bytes=0, total_tx_packets=0;
i = tx_ring->next_to_clean;
#ifdef CONFIG_E1000_NAPI
#define E1000_TX_WEIGHT 64
/* weight of a sort for tx, to avoid endless transmit cleanup */
- if (count++ == E1000_TX_WEIGHT) {
- cleaned = FALSE;
- break;
- }
+ if (count++ == E1000_TX_WEIGHT) break;
#endif
}
icr &= 0x70;
outb(icr, EWRK3_ICR); /* Disable all the IRQs */
- if (nicsr == (CSR_TXD | CSR_RXD))
+ if (nicsr != (CSR_TXD | CSR_RXD))
return -ENXIO;
-
/* Check that the EEPROM is alive and well and not living on Pluto... */
for (chksum = 0, i = 0; i < EEPROM_MAX; i += 2) {
union {
nv_drain_tx(dev);
nv_init_tx(dev);
setup_hw_rings(dev, NV_SETUP_TX_RING);
- netif_wake_queue(dev);
}
+ netif_wake_queue(dev);
+
/* 4) restart tx engine */
nv_start_tx(dev);
spin_unlock_irq(&np->lock);
pci_push(base);
if (!using_multi_irqs(dev)) {
- nv_nic_irq(0, dev);
+ if (np->desc_ver == DESC_VER_3)
+ nv_nic_irq_optimized(0, dev);
+ else
+ nv_nic_irq(0, dev);
if (np->msi_flags & NV_MSI_X_ENABLED)
enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
else
#include <linux/ioport.h>
#include <linux/string.h>
#include <linux/init.h>
-#include <asm/uaccess.h>
-#include <asm/io.h>
#include <linux/hdlcdrv.h>
#include <linux/baycom.h>
#include <linux/jiffies.h>
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
/* --------------------------------------------------------------------- */
#define BAYCOM_DEBUG
skb->tc_verd = SET_TC_NCLS(skb->tc_verd);
stats->tx_packets++;
stats->tx_bytes +=skb->len;
+
+ skb->dev = __dev_get_by_index(skb->iif);
+ if (!skb->dev) {
+ dev_kfree_skb(skb);
+ stats->tx_dropped++;
+ break;
+ }
+ skb->iif = _dev->ifindex;
+
if (from & AT_EGRESS) {
dp->st_rx_frm_egr++;
dev_queue_xmit(skb);
} else if (from & AT_INGRESS) {
-
dp->st_rx_frm_ing++;
+ skb_pull(skb, skb->dev->hard_header_len);
netif_rx(skb);
- } else {
- dev_kfree_skb(skb);
- stats->tx_dropped++;
- }
+ } else
+ BUG();
}
if (netif_tx_trylock(_dev)) {
stats->rx_packets++;
stats->rx_bytes+=skb->len;
- if (!from || !skb->input_dev) {
-dropped:
+ if (!(from & (AT_INGRESS|AT_EGRESS)) || !skb->iif) {
dev_kfree_skb(skb);
stats->rx_dropped++;
return ret;
- } else {
- /*
- * note we could be going
- * ingress -> egress or
- * egress -> ingress
- */
- skb->dev = skb->input_dev;
- skb->input_dev = dev;
- if (from & AT_INGRESS) {
- skb_pull(skb, skb->dev->hard_header_len);
- } else {
- if (!(from & AT_EGRESS)) {
- goto dropped;
- }
- }
}
if (skb_queue_len(&dp->rq) >= dev->tx_queue_len) {
if (ret < 0)
break;
+
+ mdelay(10);
}
kfree(patch_block);
pxa_irda_set_speed(si, si->newspeed);
si->newspeed = 0;
} else {
+ int i = 64;
+
ICCR0 = 0;
pxa_irda_fir_dma_rx_start(si);
+ while ((ICSR1 & ICSR1_RNE) && i--)
+ (void)ICDR;
ICCR0 = ICCR0_ITR | ICCR0_RXE;
+
+ if (i < 0)
+ printk(KERN_ERR "pxa_ir: cannot clear Rx FIFO!\n");
}
netif_wake_queue(dev);
}
/* EIF(Error in FIFO/End in Frame) handler for FIR */
-static void pxa_irda_fir_irq_eif(struct pxa_irda *si, struct net_device *dev)
+static void pxa_irda_fir_irq_eif(struct pxa_irda *si, struct net_device *dev, int icsr0)
{
unsigned int len, stat, data;
}
if (stat & ICSR1_ROR) {
printk(KERN_DEBUG "pxa_ir: fir receive overrun\n");
- si->stats.rx_frame_errors++;
+ si->stats.rx_over_errors++;
}
} else {
si->dma_rx_buff[len++] = data;
if (stat & ICSR1_EOF) {
/* end of frame. */
- struct sk_buff *skb = alloc_skb(len+1,GFP_ATOMIC);
+ struct sk_buff *skb;
+
+ if (icsr0 & ICSR0_FRE) {
+ printk(KERN_ERR "pxa_ir: dropping erroneous frame\n");
+ si->stats.rx_dropped++;
+ return;
+ }
+
+ skb = alloc_skb(len+1,GFP_ATOMIC);
if (!skb) {
printk(KERN_ERR "pxa_ir: fir out of memory for receive skb\n");
si->stats.rx_dropped++;
{
struct net_device *dev = dev_id;
struct pxa_irda *si = netdev_priv(dev);
- int icsr0;
+ int icsr0, i = 64;
/* stop RX DMA */
DCSR(si->rxdma) &= ~DCSR_RUN;
if (icsr0 & ICSR0_EIF) {
/* An error in FIFO occurred, or there is an end of frame */
- pxa_irda_fir_irq_eif(si, dev);
+ pxa_irda_fir_irq_eif(si, dev, icsr0);
}
ICCR0 = 0;
pxa_irda_fir_dma_rx_start(si);
+ while ((ICSR1 & ICSR1_RNE) && i--)
+ (void)ICDR;
ICCR0 = ICCR0_ITR | ICCR0_RXE;
+ if (i < 0)
+ printk(KERN_ERR "pxa_ir: cannot clear Rx FIFO!\n");
+
return IRQ_HANDLED;
}
spin_lock_init(&mp->lock);
- port_num = pd->port_number;
+ port_num = mp->port_num = pd->port_number;
/* set default config values */
eth_port_uc_addr_get(dev, dev->dev_addr);
duplex = pd->duplex;
speed = pd->speed;
- mp->port_num = port_num;
-
/* Hook up MII support for ethtool */
mp->mii.dev = dev;
mp->mii.mdio_read = mv643xx_mdio_read;
return 0;
}
+static void mv643xx_eth_shutdown(struct platform_device *pdev)
+{
+ struct net_device *dev = platform_get_drvdata(pdev);
+ struct mv643xx_private *mp = netdev_priv(dev);
+ unsigned int port_num = mp->port_num;
+
+ /* Mask all interrupts on ethernet port */
+ mv_write(MV643XX_ETH_INTERRUPT_MASK_REG(port_num), 0);
+ mv_read (MV643XX_ETH_INTERRUPT_MASK_REG(port_num));
+
+ eth_port_reset(port_num);
+}
+
static struct platform_driver mv643xx_eth_driver = {
.probe = mv643xx_eth_probe,
.remove = mv643xx_eth_remove,
+ .shutdown = mv643xx_eth_shutdown,
.driver = {
.name = MV643XX_ETH_NAME,
},
#include "myri10ge_mcp.h"
#include "myri10ge_mcp_gen_header.h"
-#define MYRI10GE_VERSION_STR "1.2.0"
+#define MYRI10GE_VERSION_STR "1.3.0-1.233"
MODULE_DESCRIPTION("Myricom 10G driver (10GbE)");
MODULE_AUTHOR("Maintainer: help@myri.com");
module_param(myri10ge_msi, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(myri10ge_msi, "Enable Message Signalled Interrupts\n");
-static int myri10ge_intr_coal_delay = 25;
+static int myri10ge_intr_coal_delay = 75;
module_param(myri10ge_intr_coal_delay, int, S_IRUGO);
MODULE_PARM_DESC(myri10ge_intr_coal_delay, "Interrupt coalescing delay\n");
module_param(myri10ge_fill_thresh, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(myri10ge_fill_thresh, "Number of empty rx slots allowed\n");
-static int myri10ge_wcfifo = 1;
+static int myri10ge_wcfifo = 0;
module_param(myri10ge_wcfifo, int, S_IRUGO);
MODULE_PARM_DESC(myri10ge_wcfifo, "Enable WC Fifo when WC is enabled\n");
/* try to refill entire ring */
while (rx->fill_cnt != (rx->cnt + rx->mask + 1)) {
idx = rx->fill_cnt & rx->mask;
-
- if ((bytes < MYRI10GE_ALLOC_SIZE / 2) &&
- (rx->page_offset + bytes <= MYRI10GE_ALLOC_SIZE)) {
+ if (rx->page_offset + bytes <= MYRI10GE_ALLOC_SIZE) {
/* we can use part of previous page */
get_page(rx->page);
} else {
/* start next packet on a cacheline boundary */
rx->page_offset += SKB_DATA_ALIGN(bytes);
+
+#if MYRI10GE_ALLOC_SIZE > 4096
+ /* don't cross a 4KB boundary */
+ if ((rx->page_offset >> 12) !=
+ ((rx->page_offset + bytes - 1) >> 12))
+ rx->page_offset = (rx->page_offset + 4096) & ~4095;
+#endif
rx->fill_cnt++;
/* copy 8 descriptors to the firmware at a time */
mss = 0;
max_segments = MXGEFW_MAX_SEND_DESC;
- if (skb->len > (dev->mtu + ETH_HLEN)) {
+ if (skb_is_gso(skb)) {
mss = skb_shinfo(skb)->gso_size;
- if (mss != 0)
- max_segments = MYRI10GE_MAX_SEND_DESC_TSO;
+ max_segments = MYRI10GE_MAX_SEND_DESC_TSO;
}
if ((unlikely(avail < max_segments))) {
#define PCI_DEVICE_ID_INTEL_E5000_PCIE23 0x25f7
#define PCI_DEVICE_ID_INTEL_E5000_PCIE47 0x25fa
+#define PCI_DEVICE_ID_INTEL_6300ESB_PCIEE1 0x3510
+#define PCI_DEVICE_ID_INTEL_6300ESB_PCIEE4 0x351b
+#define PCI_DEVICE_ID_INTEL_E3000_PCIE 0x2779
+#define PCI_DEVICE_ID_INTEL_E3010_PCIE 0x277a
+#define PCI_DEVICE_ID_SERVERWORKS_HT2100_PCIE_FIRST 0x140
+#define PCI_DEVICE_ID_SERVERWORKS_HT2100_PCIE_LAST 0x142
static void myri10ge_select_firmware(struct myri10ge_priv *mgp)
{
((bridge->vendor == PCI_VENDOR_ID_SERVERWORKS
&& bridge->device ==
PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE)
+ /* ServerWorks HT2100 */
+ || (bridge->vendor == PCI_VENDOR_ID_SERVERWORKS
+ && bridge->device >=
+ PCI_DEVICE_ID_SERVERWORKS_HT2100_PCIE_FIRST
+ && bridge->device <=
+ PCI_DEVICE_ID_SERVERWORKS_HT2100_PCIE_LAST)
+ /* All Intel E3000/E3010 PCIE ports */
+ || (bridge->vendor == PCI_VENDOR_ID_INTEL
+ && (bridge->device ==
+ PCI_DEVICE_ID_INTEL_E3000_PCIE
+ || bridge->device ==
+ PCI_DEVICE_ID_INTEL_E3010_PCIE))
+ /* All Intel 6310/6311/6321ESB PCIE ports */
+ || (bridge->vendor == PCI_VENDOR_ID_INTEL
+ && bridge->device >=
+ PCI_DEVICE_ID_INTEL_6300ESB_PCIEE1
+ && bridge->device <=
+ PCI_DEVICE_ID_INTEL_6300ESB_PCIEE4)
/* All Intel E5000 PCIE ports */
|| (bridge->vendor == PCI_VENDOR_ID_INTEL
&& bridge->device >=
/* Enable interrupts by setting the interrupt mask. */
writel(DEFAULT_INTR, ioaddr + IntrMask);
- writel(1, ioaddr + IntrEnable);
+ natsemi_irq_enable(dev);
writel(RxOn | TxOn, ioaddr + ChipCmd);
writel(StatsClear, ioaddr + StatsCtrl); /* Clear Stats */
struct netdev_private *np = netdev_priv(dev);
void __iomem * ioaddr = ns_ioaddr(dev);
- if (np->hands_off)
+ /* Reading IntrStatus automatically acknowledges it, so don't do
+ * that while interrupts are disabled (for example, while a
+ * poll is scheduled). */
+ if (np->hands_off || !readl(ioaddr + IntrEnable))
return IRQ_NONE;
- /* Reading automatically acknowledges. */
np->intr_status = readl(ioaddr + IntrStatus);
+ if (!np->intr_status)
+ return IRQ_NONE;
+
if (netif_msg_intr(np))
printk(KERN_DEBUG
"%s: Interrupt, status %#08x, mask %#08x.\n",
dev->name, np->intr_status,
readl(ioaddr + IntrMask));
- if (!np->intr_status)
- return IRQ_NONE;
-
prefetch(&np->rx_skbuff[np->cur_rx % RX_RING_SIZE]);
if (netif_rx_schedule_prep(dev)) {
/* Disable interrupts and register for poll */
natsemi_irq_disable(dev);
__netif_rx_schedule(dev);
- }
+ } else
+ printk(KERN_WARNING
+ "%s: Ignoring interrupt, status %#08x, mask %#08x.\n",
+ dev->name, np->intr_status,
+ readl(ioaddr + IntrMask));
+
return IRQ_HANDLED;
}
int work_done = 0;
do {
+ if (netif_msg_intr(np))
+ printk(KERN_DEBUG
+ "%s: Poll, status %#08x, mask %#08x.\n",
+ dev->name, np->intr_status,
+ readl(ioaddr + IntrMask));
+
+ /* netdev_rx() may read IntrStatus again if the RX state
+ * machine falls over, so do it first. */
+ if (np->intr_status &
+ (IntrRxDone | IntrRxIntr | RxStatusFIFOOver |
+ IntrRxErr | IntrRxOverrun)) {
+ netdev_rx(dev, &work_done, work_to_do);
+ }
+
if (np->intr_status &
(IntrTxDone | IntrTxIntr | IntrTxIdle | IntrTxErr)) {
spin_lock(&np->lock);
if (np->intr_status & IntrAbnormalSummary)
netdev_error(dev, np->intr_status);
- if (np->intr_status &
- (IntrRxDone | IntrRxIntr | RxStatusFIFOOver |
- IntrRxErr | IntrRxOverrun)) {
- netdev_rx(dev, &work_done, work_to_do);
- }
-
*budget -= work_done;
dev->quota -= work_done;
#ifdef CONFIG_NET_POLL_CONTROLLER
static void natsemi_poll_controller(struct net_device *dev)
{
- struct netdev_private *np = netdev_priv(dev);
-
disable_irq(dev->irq);
-
- /*
- * A real interrupt might have already reached us at this point
- * but NAPI might still haven't called us back. As the interrupt
- * status register is cleared by reading, we should prevent an
- * interrupt loss in this case...
- */
- if (!np->intr_status)
- intr_handler(dev->irq, dev);
-
+ intr_handler(dev->irq, dev);
enable_irq(dev->irq);
}
#endif
* Could be used to send a netlink message.
*/
writel(WOLPkt | LinkChange, ioaddr + IntrMask);
- writel(1, ioaddr + IntrEnable);
+ natsemi_irq_enable(dev);
}
}
disable_irq(dev->irq);
spin_lock_irq(&np->lock);
- writel(0, ioaddr + IntrEnable);
+ natsemi_irq_disable(dev);
np->hands_off = 1;
natsemi_stop_rxtx(dev);
netif_stop_queue(dev);
#define MPORT_SINGLE_FUNCTION_MODE 0x1111
extern unsigned long long netxen_dma_mask;
+extern unsigned long last_schedule_time;
/*
* NetXen host-peg signal message structure
}
printk(KERN_INFO "%s: flash unlocked. \n",
netxen_nic_driver_name);
+ last_schedule_time = jiffies;
ret = netxen_flash_erase_secondary(adapter);
if (ret != FLASH_SUCCESS) {
printk(KERN_ERR "%s: Flash erase failed.\n",
{
struct netxen_adapter *adapter = port->adapter;
new_mtu += NETXEN_NIU_HDRSIZE + NETXEN_NIU_TLRSIZE;
- netxen_nic_write_w0(adapter, NETXEN_NIU_XGE_MAX_FRAME_SIZE, new_mtu);
+ if (port->portnum == 0)
+ netxen_nic_write_w0(adapter, NETXEN_NIU_XGE_MAX_FRAME_SIZE, new_mtu);
+ else if (port->portnum == 1)
+ netxen_nic_write_w0(adapter, NETXEN_NIU_XG1_MAX_FRAME_SIZE, new_mtu);
return 0;
}
u32 data;
};
+unsigned long last_schedule_time;
+
#define NETXEN_MAX_CRB_XFORM 60
static unsigned int crb_addr_xform[NETXEN_MAX_CRB_XFORM];
#define NETXEN_ADDR_ERROR (0xffffffff)
static inline int
do_rom_fast_read(struct netxen_adapter *adapter, int addr, int *valp)
{
+ if (jiffies > (last_schedule_time + (8 * HZ))) {
+ last_schedule_time = jiffies;
+ schedule();
+ }
+
netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_ADDRESS, addr);
netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_ABYTE_CNT, 3);
- udelay(70); /* prevent bursting on CRB */
+ udelay(100); /* prevent bursting on CRB */
netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_DUMMY_BYTE_CNT, 0);
netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_INSTR_OPCODE, 0xb);
if (netxen_wait_rom_done(adapter)) {
}
/* reset abyte_cnt and dummy_byte_cnt */
netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_ABYTE_CNT, 0);
- udelay(70); /* prevent bursting on CRB */
+ udelay(100); /* prevent bursting on CRB */
netxen_nic_reg_write(adapter, NETXEN_ROMUSB_ROM_DUMMY_BYTE_CNT, 0);
*valp = netxen_nic_reg_read(adapter, NETXEN_ROMUSB_ROM_RDATA);
for (addridx = addr; addridx < (addr + size); addridx += 4) {
ret = do_rom_fast_read(adapter, addridx, (int *)bytes);
+ *(int *)bytes = cpu_to_le32(*(int *)bytes);
if (ret != 0)
break;
bytes += 4;
int timeout = 0;
int data;
- data = *(u32*)bytes;
+ data = le32_to_cpu((*(u32*)bytes));
ret = do_rom_fast_write(adapter, addridx, data);
if (ret < 0)
tp->chipset,
rtl_chip_info[tp->chipset].name);
- i = register_netdev (dev);
- if (i)
+ rc = register_netdev (dev);
+ if (rc)
goto err_out_unmap;
DPRINTK ("EXIT, returning 0\n");
======================================================================*/
-static int ibmtr_attach(struct pcmcia_device *link)
+static int __devinit ibmtr_attach(struct pcmcia_device *link)
{
ibmtr_dev_t *info;
struct net_device *dev;
#define CS_CHECK(fn, ret) \
do { last_fn = (fn); if ((last_ret = (ret)) != 0) goto cs_failed; } while (0)
-static int ibmtr_config(struct pcmcia_device *link)
+static int __devinit ibmtr_config(struct pcmcia_device *link)
{
ibmtr_dev_t *info = link->priv;
struct net_device *dev = info->dev;
/* check for address/control and protocol compression */
p = skb->data;
- if (p[0] == PPP_ALLSTATIONS && p[1] == PPP_UI) {
+ if (p[0] == PPP_ALLSTATIONS) {
/* chop off address/control */
- if (skb->len < 3)
+ if (p[1] != PPP_UI || skb->len < 3)
goto err;
p = skb_pull(skb, 2);
}
ppp->active_filter = NULL;
#endif /* CONFIG_PPP_FILTER */
+ if (ppp->xmit_pending)
+ kfree_skb(ppp->xmit_pending);
+
kfree(ppp);
}
return 0;
}
+/*
+ * Caller holds hw_lock.
+ */
+static void ql_update_small_bufq_prod_index(struct ql3_adapter *qdev)
+{
+ struct ql3xxx_port_registers __iomem *port_regs = qdev->mem_map_registers;
+ if (qdev->small_buf_release_cnt >= 16) {
+ while (qdev->small_buf_release_cnt >= 16) {
+ qdev->small_buf_q_producer_index++;
+
+ if (qdev->small_buf_q_producer_index ==
+ NUM_SBUFQ_ENTRIES)
+ qdev->small_buf_q_producer_index = 0;
+ qdev->small_buf_release_cnt -= 8;
+ }
+ wmb();
+ writel(qdev->small_buf_q_producer_index,
+ &port_regs->CommonRegs.rxSmallQProducerIndex);
+ }
+}
+
/*
* Caller holds hw_lock.
*/
lrg_buf_q_ele = qdev->lrg_buf_q_virt_addr;
}
}
-
+ wmb();
qdev->lrg_buf_next_free = lrg_buf_q_ele;
-
- ql_write_common_reg(qdev,
- &port_regs->CommonRegs.
- rxLargeQProducerIndex,
- qdev->lrg_buf_q_producer_index);
+ writel(qdev->lrg_buf_q_producer_index,
+ &port_regs->CommonRegs.rxLargeQProducerIndex);
}
}
u16 checksum = le16_to_cpu(ib_ip_rsp_ptr->checksum);
if (checksum &
(IB_IP_IOCB_RSP_3032_ICE |
- IB_IP_IOCB_RSP_3032_CE |
- IB_IP_IOCB_RSP_3032_NUC)) {
+ IB_IP_IOCB_RSP_3032_CE)) {
printk(KERN_ERR
"%s: Bad checksum for this %s packet, checksum = %x.\n",
__func__,
((checksum &
IB_IP_IOCB_RSP_3032_TCP) ? "TCP" :
"UDP"),checksum);
- } else if (checksum & IB_IP_IOCB_RSP_3032_TCP) {
+ } else if ((checksum & IB_IP_IOCB_RSP_3032_TCP) ||
+ (checksum & IB_IP_IOCB_RSP_3032_UDP &&
+ !(checksum & IB_IP_IOCB_RSP_3032_NUC))) {
skb2->ip_summed = CHECKSUM_UNNECESSARY;
- }
+ }
}
skb2->dev = qdev->ndev;
skb2->protocol = eth_type_trans(skb2, qdev->ndev);
static int ql_tx_rx_clean(struct ql3_adapter *qdev,
int *tx_cleaned, int *rx_cleaned, int work_to_do)
{
- struct ql3xxx_port_registers __iomem *port_regs = qdev->mem_map_registers;
struct net_rsp_iocb *net_rsp;
struct net_device *ndev = qdev->ndev;
- unsigned long hw_flags;
int work_done = 0;
- u32 rsp_producer_index = le32_to_cpu(*(qdev->prsp_producer_index));
-
/* While there are entries in the completion queue. */
- while ((rsp_producer_index !=
+ while ((le32_to_cpu(*(qdev->prsp_producer_index)) !=
qdev->rsp_consumer_index) && (work_done < work_to_do)) {
net_rsp = qdev->rsp_current;
work_done = *tx_cleaned + *rx_cleaned;
}
- if(work_done) {
- spin_lock_irqsave(&qdev->hw_lock, hw_flags);
-
- ql_update_lrg_bufq_prod_index(qdev);
-
- if (qdev->small_buf_release_cnt >= 16) {
- while (qdev->small_buf_release_cnt >= 16) {
- qdev->small_buf_q_producer_index++;
-
- if (qdev->small_buf_q_producer_index ==
- NUM_SBUFQ_ENTRIES)
- qdev->small_buf_q_producer_index = 0;
- qdev->small_buf_release_cnt -= 8;
- }
-
- wmb();
- ql_write_common_reg(qdev,
- &port_regs->CommonRegs.
- rxSmallQProducerIndex,
- qdev->small_buf_q_producer_index);
-
- }
-
- spin_unlock_irqrestore(&qdev->hw_lock, hw_flags);
- }
-
- return *tx_cleaned + *rx_cleaned;
+ return work_done;
}
static int ql_poll(struct net_device *ndev, int *budget)
netif_rx_complete(ndev);
spin_lock_irqsave(&qdev->hw_lock, hw_flags);
- ql_write_common_reg(qdev,
- &port_regs->CommonRegs.rspQConsumerIndex,
- qdev->rsp_consumer_index);
+ ql_update_small_bufq_prod_index(qdev);
+ ql_update_lrg_bufq_prod_index(qdev);
+ writel(qdev->rsp_consumer_index,
+ &port_regs->CommonRegs.rspQConsumerIndex);
spin_unlock_irqrestore(&qdev->hw_lock, hw_flags);
ql_enable_interrupts(qdev);
int seg_cnt, seg = 0;
int frag_cnt = (int)skb_shinfo(skb)->nr_frags;
- seg_cnt = tx_cb->seg_count = ql_get_seg_count(qdev,
- (skb_shinfo(skb)->nr_frags));
- if(seg_cnt == -1) {
- printk(KERN_ERR PFX"%s: invalid segment count!\n",__func__);
- return NETDEV_TX_BUSY;
- }
+ seg_cnt = tx_cb->seg_count;
/*
* Map the skb buffer first.
*/
pci_unmap_addr_set(&tx_cb->map[seg], mapaddr,
map);
pci_unmap_len_set(&tx_cb->map[seg], maplen,
- len);
+ sizeof(struct oal));
oal_entry = (struct oal_entry *)oal;
oal++;
seg++;
}
mac_iocb_ptr = tx_cb->queue_entry;
+ memset((void *)mac_iocb_ptr, 0, sizeof(struct ob_mac_iocb_req));
mac_iocb_ptr->opcode = qdev->mac_ob_opcode;
mac_iocb_ptr->flags = OB_MAC_IOCB_REQ_X;
mac_iocb_ptr->flags |= qdev->mb_bit_mask;
goto out;
}
- if (qdev->mac_index)
- ql_write_page0_reg(qdev,
- &port_regs->mac1MaxFrameLengthReg,
- qdev->max_frame_size);
- else
- ql_write_page0_reg(qdev,
- &port_regs->mac0MaxFrameLengthReg,
- qdev->max_frame_size);
-
value = qdev->nvram_data.tcpMaxWindowSize;
ql_write_page0_reg(qdev, &port_regs->tcpMaxWindow, value);
ql_sem_unlock(qdev, QL_FLASH_SEM_MASK);
}
+ if (qdev->mac_index)
+ ql_write_page0_reg(qdev,
+ &port_regs->mac1MaxFrameLengthReg,
+ qdev->max_frame_size);
+ else
+ ql_write_page0_reg(qdev,
+ &port_regs->mac0MaxFrameLengthReg,
+ qdev->max_frame_size);
if(ql_sem_spinlock(qdev, QL_PHY_GIO_SEM_MASK,
(QL_RESOURCE_BITS_BASE_CODE | (qdev->mac_index) *
if (qdev->device_id == QL3032_DEVICE_ID) {
value =
(QL3032_PORT_CONTROL_EF | QL3032_PORT_CONTROL_KIE |
- QL3032_PORT_CONTROL_EIv6 | QL3032_PORT_CONTROL_EIv4);
+ QL3032_PORT_CONTROL_EIv6 | QL3032_PORT_CONTROL_EIv4 |
+ QL3032_PORT_CONTROL_ET);
ql_write_page0_reg(qdev, &port_regs->functionControl,
((value << 16) | value));
} else {
/* Transmit and Receive Buffers */
#define NUM_LBUFQ_ENTRIES 128
-#define JUMBO_NUM_LBUFQ_ENTRIES \
-(NUM_LBUFQ_ENTRIES/(JUMBO_MTU_SIZE/NORMAL_MTU_SIZE))
+#define JUMBO_NUM_LBUFQ_ENTRIES 32
#define NUM_SBUFQ_ENTRIES 64
#define QL_SMALL_BUFFER_SIZE 32
#define QL_ADDR_ELE_PER_BUFQ_ENTRY \
#include <linux/init.h>
#include <linux/dma-mapping.h>
+#include <asm/system.h>
#include <asm/io.h>
#include <asm/irq.h>
void __iomem *);
static int rtl8169_change_mtu(struct net_device *dev, int new_mtu);
static void rtl8169_down(struct net_device *dev);
+static void rtl8169_rx_clear(struct rtl8169_private *tp);
#ifdef CONFIG_R8169_NAPI
static int rtl8169_poll(struct net_device *dev, int *budget);
{
struct rtl8169_private *tp = netdev_priv(dev);
struct pci_dev *pdev = tp->pci_dev;
- int retval;
+ int retval = -ENOMEM;
- rtl8169_set_rxbufsize(tp, dev);
-
- retval =
- request_irq(dev->irq, rtl8169_interrupt, IRQF_SHARED, dev->name, dev);
- if (retval < 0)
- goto out;
- retval = -ENOMEM;
+ rtl8169_set_rxbufsize(tp, dev);
/*
* Rx and Tx desscriptors needs 256 bytes alignment.
tp->TxDescArray = pci_alloc_consistent(pdev, R8169_TX_RING_BYTES,
&tp->TxPhyAddr);
if (!tp->TxDescArray)
- goto err_free_irq;
+ goto out;
tp->RxDescArray = pci_alloc_consistent(pdev, R8169_RX_RING_BYTES,
&tp->RxPhyAddr);
if (!tp->RxDescArray)
- goto err_free_tx;
+ goto err_free_tx_0;
retval = rtl8169_init_ring(dev);
if (retval < 0)
- goto err_free_rx;
+ goto err_free_rx_1;
INIT_DELAYED_WORK(&tp->task, NULL);
+ smp_mb();
+
+ retval = request_irq(dev->irq, rtl8169_interrupt, IRQF_SHARED,
+ dev->name, dev);
+ if (retval < 0)
+ goto err_release_ring_2;
+
rtl8169_hw_start(dev);
rtl8169_request_timer(dev);
out:
return retval;
-err_free_rx:
+err_release_ring_2:
+ rtl8169_rx_clear(tp);
+err_free_rx_1:
pci_free_consistent(pdev, R8169_RX_RING_BYTES, tp->RxDescArray,
tp->RxPhyAddr);
-err_free_tx:
+err_free_tx_0:
pci_free_consistent(pdev, R8169_TX_RING_BYTES, tp->TxDescArray,
tp->TxPhyAddr);
-err_free_irq:
- free_irq(dev->irq, dev);
goto out;
}
void __iomem *ioaddr = tp->mmio_addr;
if (!netif_running(dev))
- goto out;
+ goto out_pci_suspend;
netif_device_detach(dev);
netif_stop_queue(dev);
spin_unlock_irq(&tp->lock);
+out_pci_suspend:
pci_save_state(pdev);
pci_enable_wake(pdev, pci_choose_state(pdev, state), tp->wol_enabled);
pci_set_power_state(pdev, pci_choose_state(pdev, state));
-out:
+
return 0;
}
{
struct net_device *dev = pci_get_drvdata(pdev);
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+ pci_enable_wake(pdev, PCI_D0, 0);
+
if (!netif_running(dev))
goto out;
netif_device_attach(dev);
- pci_set_power_state(pdev, PCI_D0);
- pci_restore_state(pdev);
- pci_enable_wake(pdev, PCI_D0, 0);
-
rtl8169_schedule_work(dev, rtl8169_reset_task);
out:
return 0;
static void evm_saa9730_enable_lan_int(struct lan_saa9730_private *lp)
{
- outl(readl(&lp->evm_saa9730_regs->InterruptBlock1) | EVM_LAN_INT,
- &lp->evm_saa9730_regs->InterruptBlock1);
- outl(readl(&lp->evm_saa9730_regs->InterruptStatus1) | EVM_LAN_INT,
- &lp->evm_saa9730_regs->InterruptStatus1);
- outl(readl(&lp->evm_saa9730_regs->InterruptEnable1) | EVM_LAN_INT |
- EVM_MASTER_EN, &lp->evm_saa9730_regs->InterruptEnable1);
+ writel(readl(&lp->evm_saa9730_regs->InterruptBlock1) | EVM_LAN_INT,
+ &lp->evm_saa9730_regs->InterruptBlock1);
+ writel(readl(&lp->evm_saa9730_regs->InterruptStatus1) | EVM_LAN_INT,
+ &lp->evm_saa9730_regs->InterruptStatus1);
+ writel(readl(&lp->evm_saa9730_regs->InterruptEnable1) | EVM_LAN_INT |
+ EVM_MASTER_EN, &lp->evm_saa9730_regs->InterruptEnable1);
}
static void evm_saa9730_disable_lan_int(struct lan_saa9730_private *lp)
{
- outl(readl(&lp->evm_saa9730_regs->InterruptBlock1) & ~EVM_LAN_INT,
- &lp->evm_saa9730_regs->InterruptBlock1);
- outl(readl(&lp->evm_saa9730_regs->InterruptEnable1) & ~EVM_LAN_INT,
- &lp->evm_saa9730_regs->InterruptEnable1);
+ writel(readl(&lp->evm_saa9730_regs->InterruptBlock1) & ~EVM_LAN_INT,
+ &lp->evm_saa9730_regs->InterruptBlock1);
+ writel(readl(&lp->evm_saa9730_regs->InterruptEnable1) & ~EVM_LAN_INT,
+ &lp->evm_saa9730_regs->InterruptEnable1);
}
static void evm_saa9730_clear_lan_int(struct lan_saa9730_private *lp)
{
- outl(EVM_LAN_INT, &lp->evm_saa9730_regs->InterruptStatus1);
+ writel(EVM_LAN_INT, &lp->evm_saa9730_regs->InterruptStatus1);
}
static void evm_saa9730_block_lan_int(struct lan_saa9730_private *lp)
{
- outl(readl(&lp->evm_saa9730_regs->InterruptBlock1) & ~EVM_LAN_INT,
- &lp->evm_saa9730_regs->InterruptBlock1);
+ writel(readl(&lp->evm_saa9730_regs->InterruptBlock1) & ~EVM_LAN_INT,
+ &lp->evm_saa9730_regs->InterruptBlock1);
}
static void evm_saa9730_unblock_lan_int(struct lan_saa9730_private *lp)
{
- outl(readl(&lp->evm_saa9730_regs->InterruptBlock1) | EVM_LAN_INT,
- &lp->evm_saa9730_regs->InterruptBlock1);
+ writel(readl(&lp->evm_saa9730_regs->InterruptBlock1) | EVM_LAN_INT,
+ &lp->evm_saa9730_regs->InterruptBlock1);
}
static void __attribute_used__ show_saa9730_regs(struct lan_saa9730_private *lp)
printk("lp->lan_saa9730_regs->RxStatus = %x\n",
readl(&lp->lan_saa9730_regs->RxStatus));
for (i = 0; i < LAN_SAA9730_CAM_DWORDS; i++) {
- outl(i, &lp->lan_saa9730_regs->CamAddress);
+ writel(i, &lp->lan_saa9730_regs->CamAddress);
printk("lp->lan_saa9730_regs->CamData = %x\n",
readl(&lp->lan_saa9730_regs->CamData));
}
* Set rx buffer A and rx buffer B to point to the first two buffer
* spaces.
*/
- outl(lp->dma_addr + rxoffset,
- &lp->lan_saa9730_regs->RxBuffA);
- outl(lp->dma_addr + rxoffset +
- LAN_SAA9730_PACKET_SIZE * LAN_SAA9730_RCV_Q_SIZE,
- &lp->lan_saa9730_regs->RxBuffB);
+ writel(lp->dma_addr + rxoffset, &lp->lan_saa9730_regs->RxBuffA);
+ writel(lp->dma_addr + rxoffset +
+ LAN_SAA9730_PACKET_SIZE * LAN_SAA9730_RCV_Q_SIZE,
+ &lp->lan_saa9730_regs->RxBuffB);
/*
* Set txm_buf_a and txm_buf_b to point to the first two buffer
* space
*/
- outl(lp->dma_addr + txoffset,
- &lp->lan_saa9730_regs->TxBuffA);
- outl(lp->dma_addr + txoffset +
- LAN_SAA9730_PACKET_SIZE * LAN_SAA9730_TXM_Q_SIZE,
- &lp->lan_saa9730_regs->TxBuffB);
+ writel(lp->dma_addr + txoffset,
+ &lp->lan_saa9730_regs->TxBuffA);
+ writel(lp->dma_addr + txoffset +
+ LAN_SAA9730_PACKET_SIZE * LAN_SAA9730_TXM_Q_SIZE,
+ &lp->lan_saa9730_regs->TxBuffB);
/* Set packet number */
- outl((lp->DmaRcvPackets << PK_COUNT_RX_A_SHF) |
- (lp->DmaRcvPackets << PK_COUNT_RX_B_SHF) |
- (lp->DmaTxmPackets << PK_COUNT_TX_A_SHF) |
- (lp->DmaTxmPackets << PK_COUNT_TX_B_SHF),
- &lp->lan_saa9730_regs->PacketCount);
+ writel((lp->DmaRcvPackets << PK_COUNT_RX_A_SHF) |
+ (lp->DmaRcvPackets << PK_COUNT_RX_B_SHF) |
+ (lp->DmaTxmPackets << PK_COUNT_TX_A_SHF) |
+ (lp->DmaTxmPackets << PK_COUNT_TX_B_SHF),
+ &lp->lan_saa9730_regs->PacketCount);
return 0;
for (i = 0; i < LAN_SAA9730_CAM_DWORDS; i++) {
/* First set address to where data is written */
- outl(i, &lp->lan_saa9730_regs->CamAddress);
- outl((NetworkAddress[0] << 24) | (NetworkAddress[1] << 16)
- | (NetworkAddress[2] << 8) | NetworkAddress[3],
- &lp->lan_saa9730_regs->CamData);
+ writel(i, &lp->lan_saa9730_regs->CamAddress);
+ writel((NetworkAddress[0] << 24) | (NetworkAddress[1] << 16) |
+ (NetworkAddress[2] << 8) | NetworkAddress[3],
+ &lp->lan_saa9730_regs->CamData);
NetworkAddress += 4;
}
return 0;
}
/* Now set the control and address register. */
- outl(MD_CA_BUSY | PHY_STATUS | PHY_ADDRESS << MD_CA_PHY_SHF,
- &lp->lan_saa9730_regs->StationMgmtCtl);
+ writel(MD_CA_BUSY | PHY_STATUS | PHY_ADDRESS << MD_CA_PHY_SHF,
+ &lp->lan_saa9730_regs->StationMgmtCtl);
/* check link status, spin here till station is not busy */
i = 0;
/* Link is down, reset the PHY first. */
/* set PHY address = 'CONTROL' */
- outl(PHY_ADDRESS << MD_CA_PHY_SHF | MD_CA_WR | PHY_CONTROL,
- &lp->lan_saa9730_regs->StationMgmtCtl);
+ writel(PHY_ADDRESS << MD_CA_PHY_SHF | MD_CA_WR | PHY_CONTROL,
+ &lp->lan_saa9730_regs->StationMgmtCtl);
/* Wait for 1 ms. */
mdelay(1);
/* set 'CONTROL' = force reset and renegotiate */
- outl(PHY_CONTROL_RESET | PHY_CONTROL_AUTO_NEG |
- PHY_CONTROL_RESTART_AUTO_NEG,
- &lp->lan_saa9730_regs->StationMgmtData);
+ writel(PHY_CONTROL_RESET | PHY_CONTROL_AUTO_NEG |
+ PHY_CONTROL_RESTART_AUTO_NEG,
+ &lp->lan_saa9730_regs->StationMgmtData);
/* Wait for 50 ms. */
mdelay(50);
/* set 'BUSY' to start operation */
- outl(MD_CA_BUSY | PHY_ADDRESS << MD_CA_PHY_SHF | MD_CA_WR |
- PHY_CONTROL, &lp->lan_saa9730_regs->StationMgmtCtl);
+ writel(MD_CA_BUSY | PHY_ADDRESS << MD_CA_PHY_SHF | MD_CA_WR |
+ PHY_CONTROL, &lp->lan_saa9730_regs->StationMgmtCtl);
/* await completion */
i = 0;
for (l = 0; l < 2; l++) {
/* set PHY address = 'STATUS' */
- outl(MD_CA_BUSY | PHY_ADDRESS << MD_CA_PHY_SHF |
- PHY_STATUS,
- &lp->lan_saa9730_regs->StationMgmtCtl);
+ writel(MD_CA_BUSY | PHY_ADDRESS << MD_CA_PHY_SHF |
+ PHY_STATUS,
+ &lp->lan_saa9730_regs->StationMgmtCtl);
/* await completion */
i = 0;
static int lan_saa9730_control_init(struct lan_saa9730_private *lp)
{
/* Initialize DMA control register. */
- outl((LANMB_ANY << DMA_CTL_MAX_XFER_SHF) |
- (LANEND_LITTLE << DMA_CTL_ENDIAN_SHF) |
- (LAN_SAA9730_RCV_Q_INT_THRESHOLD << DMA_CTL_RX_INT_COUNT_SHF)
- | DMA_CTL_RX_INT_TO_EN | DMA_CTL_RX_INT_EN |
- DMA_CTL_MAC_RX_INT_EN | DMA_CTL_MAC_TX_INT_EN,
- &lp->lan_saa9730_regs->LanDmaCtl);
+ writel((LANMB_ANY << DMA_CTL_MAX_XFER_SHF) |
+ (LANEND_LITTLE << DMA_CTL_ENDIAN_SHF) |
+ (LAN_SAA9730_RCV_Q_INT_THRESHOLD << DMA_CTL_RX_INT_COUNT_SHF)
+ | DMA_CTL_RX_INT_TO_EN | DMA_CTL_RX_INT_EN |
+ DMA_CTL_MAC_RX_INT_EN | DMA_CTL_MAC_TX_INT_EN,
+ &lp->lan_saa9730_regs->LanDmaCtl);
/* Initial MAC control register. */
- outl((MACCM_MII << MAC_CONTROL_CONN_SHF) | MAC_CONTROL_FULL_DUP,
- &lp->lan_saa9730_regs->MacCtl);
+ writel((MACCM_MII << MAC_CONTROL_CONN_SHF) | MAC_CONTROL_FULL_DUP,
+ &lp->lan_saa9730_regs->MacCtl);
/* Initialize CAM control register. */
- outl(CAM_CONTROL_COMP_EN | CAM_CONTROL_BROAD_ACC,
- &lp->lan_saa9730_regs->CamCtl);
+ writel(CAM_CONTROL_COMP_EN | CAM_CONTROL_BROAD_ACC,
+ &lp->lan_saa9730_regs->CamCtl);
/*
* Initialize CAM enable register, only turn on first entry, should
* contain own addr.
*/
- outl(0x0001, &lp->lan_saa9730_regs->CamEnable);
+ writel(0x0001, &lp->lan_saa9730_regs->CamEnable);
/* Initialize Tx control register */
- outl(TX_CTL_EN_COMP, &lp->lan_saa9730_regs->TxCtl);
+ writel(TX_CTL_EN_COMP, &lp->lan_saa9730_regs->TxCtl);
/* Initialize Rcv control register */
- outl(RX_CTL_STRIP_CRC, &lp->lan_saa9730_regs->RxCtl);
+ writel(RX_CTL_STRIP_CRC, &lp->lan_saa9730_regs->RxCtl);
/* Reset DMA engine */
- outl(DMA_TEST_SW_RESET, &lp->lan_saa9730_regs->DmaTest);
+ writel(DMA_TEST_SW_RESET, &lp->lan_saa9730_regs->DmaTest);
return 0;
}
int i;
/* Stop DMA first */
- outl(readl(&lp->lan_saa9730_regs->LanDmaCtl) &
- ~(DMA_CTL_EN_TX_DMA | DMA_CTL_EN_RX_DMA),
- &lp->lan_saa9730_regs->LanDmaCtl);
+ writel(readl(&lp->lan_saa9730_regs->LanDmaCtl) &
+ ~(DMA_CTL_EN_TX_DMA | DMA_CTL_EN_RX_DMA),
+ &lp->lan_saa9730_regs->LanDmaCtl);
/* Set the SW Reset bits in DMA and MAC control registers */
- outl(DMA_TEST_SW_RESET, &lp->lan_saa9730_regs->DmaTest);
- outl(readl(&lp->lan_saa9730_regs->MacCtl) | MAC_CONTROL_RESET,
- &lp->lan_saa9730_regs->MacCtl);
+ writel(DMA_TEST_SW_RESET, &lp->lan_saa9730_regs->DmaTest);
+ writel(readl(&lp->lan_saa9730_regs->MacCtl) | MAC_CONTROL_RESET,
+ &lp->lan_saa9730_regs->MacCtl);
/*
* Wait for MAC reset to have finished. The reset bit is auto cleared
/* Stop lan controller. */
lan_saa9730_stop(lp);
- outl(LAN_SAA9730_DEFAULT_TIME_OUT_CNT,
- &lp->lan_saa9730_regs->Timeout);
+ writel(LAN_SAA9730_DEFAULT_TIME_OUT_CNT,
+ &lp->lan_saa9730_regs->Timeout);
return 0;
}
lp->PendingTxmPacketIndex = 0;
lp->PendingTxmBufferIndex = 0;
- outl(readl(&lp->lan_saa9730_regs->LanDmaCtl) | DMA_CTL_EN_TX_DMA |
- DMA_CTL_EN_RX_DMA, &lp->lan_saa9730_regs->LanDmaCtl);
+ writel(readl(&lp->lan_saa9730_regs->LanDmaCtl) | DMA_CTL_EN_TX_DMA |
+ DMA_CTL_EN_RX_DMA, &lp->lan_saa9730_regs->LanDmaCtl);
/* For Tx, turn on MAC then DMA */
- outl(readl(&lp->lan_saa9730_regs->TxCtl) | TX_CTL_TX_EN,
- &lp->lan_saa9730_regs->TxCtl);
+ writel(readl(&lp->lan_saa9730_regs->TxCtl) | TX_CTL_TX_EN,
+ &lp->lan_saa9730_regs->TxCtl);
/* For Rx, turn on DMA then MAC */
- outl(readl(&lp->lan_saa9730_regs->RxCtl) | RX_CTL_RX_EN,
- &lp->lan_saa9730_regs->RxCtl);
+ writel(readl(&lp->lan_saa9730_regs->RxCtl) | RX_CTL_RX_EN,
+ &lp->lan_saa9730_regs->RxCtl);
/* Set Ok2Use to let hardware own the buffers. */
- outl(OK2USE_RX_A | OK2USE_RX_B, &lp->lan_saa9730_regs->Ok2Use);
+ writel(OK2USE_RX_A | OK2USE_RX_B, &lp->lan_saa9730_regs->Ok2Use);
return 0;
}
printk("lan_saa9730_tx interrupt\n");
/* Clear interrupt. */
- outl(DMA_STATUS_MAC_TX_INT, &lp->lan_saa9730_regs->DmaStatus);
+ writel(DMA_STATUS_MAC_TX_INT, &lp->lan_saa9730_regs->DmaStatus);
while (1) {
pPacket = lp->TxmBuffer[lp->PendingTxmBufferIndex]
printk("lan_saa9730_rx interrupt\n");
/* Clear receive interrupts. */
- outl(DMA_STATUS_MAC_RX_INT | DMA_STATUS_RX_INT |
- DMA_STATUS_RX_TO_INT, &lp->lan_saa9730_regs->DmaStatus);
+ writel(DMA_STATUS_MAC_RX_INT | DMA_STATUS_RX_INT |
+ DMA_STATUS_RX_TO_INT, &lp->lan_saa9730_regs->DmaStatus);
/* Address next packet */
BufferIndex = lp->NextRcvBufferIndex;
*pPacket = cpu_to_le32(RXSF_READY << RX_STAT_CTL_OWNER_SHF);
/* Make sure A or B is available to hardware as appropriate. */
- outl(BufferIndex ? OK2USE_RX_B : OK2USE_RX_A,
- &lp->lan_saa9730_regs->Ok2Use);
+ writel(BufferIndex ? OK2USE_RX_B : OK2USE_RX_A,
+ &lp->lan_saa9730_regs->Ok2Use);
/* Go to next packet in sequence. */
lp->NextRcvPacketIndex++;
(len << TX_STAT_CTL_LENGTH_SHF));
/* Make sure A or B is available to hardware as appropriate. */
- outl(BufferIndex ? OK2USE_TX_B : OK2USE_TX_A,
- &lp->lan_saa9730_regs->Ok2Use);
+ writel(BufferIndex ? OK2USE_TX_B : OK2USE_TX_A,
+ &lp->lan_saa9730_regs->Ok2Use);
return 0;
}
if (dev->flags & IFF_PROMISC) {
/* accept all packets */
- outl(CAM_CONTROL_COMP_EN | CAM_CONTROL_STATION_ACC |
- CAM_CONTROL_GROUP_ACC | CAM_CONTROL_BROAD_ACC,
- &lp->lan_saa9730_regs->CamCtl);
+ writel(CAM_CONTROL_COMP_EN | CAM_CONTROL_STATION_ACC |
+ CAM_CONTROL_GROUP_ACC | CAM_CONTROL_BROAD_ACC,
+ &lp->lan_saa9730_regs->CamCtl);
} else {
if (dev->flags & IFF_ALLMULTI) {
/* accept all multicast packets */
- outl(CAM_CONTROL_COMP_EN | CAM_CONTROL_GROUP_ACC |
- CAM_CONTROL_BROAD_ACC,
- &lp->lan_saa9730_regs->CamCtl);
+ writel(CAM_CONTROL_COMP_EN | CAM_CONTROL_GROUP_ACC |
+ CAM_CONTROL_BROAD_ACC,
+ &lp->lan_saa9730_regs->CamCtl);
} else {
/*
* Will handle the multicast stuff later. -carstenl
* Controller-specific things
*/
- volatile void __iomem *sbm_base; /* MAC's base address */
+ void __iomem *sbm_base; /* MAC's base address */
sbmac_state_t sbm_state; /* current state */
volatile void __iomem *sbm_macenable; /* MAC Enable Register */
goto out;
}
- spin_lock_bh(&priv->lock);
+ spin_lock(&priv->lock);
if (unlikely(!netif_carrier_ok(dev))) {
err = -ENOLINK;
netif_stop_queue(dev);
out_unlock:
- spin_unlock_bh(&priv->lock);
+ spin_unlock(&priv->lock);
out:
dev_kfree_skb(skb);
priv->pm_config = 0;
/* Interrupts already disabled by sc92031_stop or sc92031_probe */
- spin_lock(&priv->lock);
+ spin_lock_bh(&priv->lock);
_sc92031_reset(dev);
mmiowb();
- spin_unlock(&priv->lock);
+ spin_unlock_bh(&priv->lock);
sc92031_enable_interrupts(dev);
if (netif_carrier_ok(dev))
/* Disable interrupts, stop Tx and Rx. */
sc92031_disable_interrupts(dev);
- spin_lock(&priv->lock);
+ spin_lock_bh(&priv->lock);
_sc92031_disable_tx_rx(dev);
_sc92031_tx_clear(dev);
mmiowb();
- spin_unlock(&priv->lock);
+ spin_unlock_bh(&priv->lock);
free_irq(pdev->irq, dev);
pci_free_consistent(pdev, TX_BUF_TOT_LEN, priv->tx_bufs,
/* Disable interrupts, stop Tx and Rx. */
sc92031_disable_interrupts(dev);
- spin_lock(&priv->lock);
+ spin_lock_bh(&priv->lock);
_sc92031_disable_tx_rx(dev);
_sc92031_tx_clear(dev);
mmiowb();
- spin_unlock(&priv->lock);
+ spin_unlock_bh(&priv->lock);
out:
pci_set_power_state(pdev, pci_choose_state(pdev, state));
goto out;
/* Interrupts already disabled by sc92031_suspend */
- spin_lock(&priv->lock);
+ spin_lock_bh(&priv->lock);
_sc92031_reset(dev);
mmiowb();
- spin_unlock(&priv->lock);
+ spin_unlock_bh(&priv->lock);
sc92031_enable_interrupts(dev);
netif_device_attach(dev);
u32 feature;
} mii_chip_table[] = {
{ "Broadcom PHY BCM5461", { 0x0020, 0x60c0 }, LAN, F_PHY_BCM5461 },
+ { "Broadcom PHY AC131", { 0x0143, 0xbc70 }, LAN, 0 },
{ "Agere PHY ET1101B", { 0x0282, 0xf010 }, LAN, 0 },
{ "Marvell PHY 88E1111", { 0x0141, 0x0cc0 }, LAN, F_PHY_88E1111 },
{ "Realtek PHY RTL8201", { 0x0000, 0x8200 }, LAN, 0 },
} else {
struct sk_buff * skb;
+ pci_unmap_single(sis_priv->pci_dev,
+ sis_priv->rx_ring[entry].bufptr, RX_BUF_SIZE,
+ PCI_DMA_FROMDEVICE);
+
+ /* refill the Rx buffer; what if there is not enough
+ * memory for a new socket buffer? */
+ if ((skb = dev_alloc_skb(RX_BUF_SIZE)) == NULL) {
+ /*
+ * Not enough memory to refill the buffer
+ * so we need to recycle the old one so
+ * as to avoid creating a memory hole
+ * in the rx ring
+ */
+ skb = sis_priv->rx_skbuff[entry];
+ sis_priv->stats.rx_dropped++;
+ goto refill_rx_ring;
+ }
+
/* This situation should never happen, but due to
some unknown bugs, it is possible that
we are working on NULL sk_buff :-( */
break;
}
- pci_unmap_single(sis_priv->pci_dev,
- sis_priv->rx_ring[entry].bufptr, RX_BUF_SIZE,
- PCI_DMA_FROMDEVICE);
/* give the socket buffer to upper layers */
skb = sis_priv->rx_skbuff[entry];
skb_put(skb, rx_size);
net_dev->last_rx = jiffies;
sis_priv->stats.rx_bytes += rx_size;
sis_priv->stats.rx_packets++;
-
- /* refill the Rx buffer, what if there is not enought
- * memory for new socket buffer ?? */
- if ((skb = dev_alloc_skb(RX_BUF_SIZE)) == NULL) {
- /* not enough memory for skbuff, this makes a
- * "hole" on the buffer ring, it is not clear
- * how the hardware will react to this kind
- * of degenerated buffer */
- if (netif_msg_rx_status(sis_priv))
- printk(KERN_INFO "%s: Memory squeeze,"
- "deferring packet.\n",
- net_dev->name);
- sis_priv->rx_skbuff[entry] = NULL;
- /* reset buffer descriptor state */
- sis_priv->rx_ring[entry].cmdsts = 0;
- sis_priv->rx_ring[entry].bufptr = 0;
- sis_priv->stats.rx_dropped++;
- sis_priv->cur_rx++;
- break;
- }
+ sis_priv->dirty_rx++;
+refill_rx_ring:
skb->dev = net_dev;
sis_priv->rx_skbuff[entry] = skb;
sis_priv->rx_ring[entry].cmdsts = RX_BUF_SIZE;
sis_priv->rx_ring[entry].bufptr =
pci_map_single(sis_priv->pci_dev, skb->data,
RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
- sis_priv->dirty_rx++;
}
sis_priv->cur_rx++;
entry = sis_priv->cur_rx % NUM_RX_DESC;
static const int rxqaddr[] = { Q_R1, Q_R2 };
static const u32 rxirqmask[] = { IS_R1_F, IS_R2_F };
static const u32 txirqmask[] = { IS_XA1_F, IS_XA2_F };
-static const u32 irqmask[] = { IS_R1_F|IS_XA1_F, IS_R2_F|IS_XA2_F };
+static const u32 napimask[] = { IS_R1_F|IS_XA1_F, IS_R2_F|IS_XA2_F };
+static const u32 portmask[] = { IS_PORT_1, IS_PORT_2 };
static int skge_get_regs_len(struct net_device *dev)
{
{
struct skge_hw *hw = skge->hw;
int port = skge->port;
- enum pause_control save_mode;
- u32 ctrl;
+ u16 ctrl;
- /* Bring hardware out of reset */
skge_write16(hw, B0_CTST, CS_RST_CLR);
skge_write16(hw, SK_REG(port, GMAC_LINK_CTRL), GMLC_RST_CLR);
- skge_write8(hw, SK_REG(port, GPHY_CTRL), GPC_RST_CLR);
- skge_write8(hw, SK_REG(port, GMAC_CTRL), GMC_RST_CLR);
+ /* Turn on Vaux */
+ skge_write8(hw, B0_POWER_CTRL,
+ PC_VAUX_ENA | PC_VCC_ENA | PC_VAUX_ON | PC_VCC_OFF);
- /* Force to 10/100 skge_reset will re-enable on resume */
- save_mode = skge->flow_control;
- skge->flow_control = FLOW_MODE_SYMMETRIC;
+ /* WA code for COMA mode -- clear PHY reset */
+ if (hw->chip_id == CHIP_ID_YUKON_LITE &&
+ hw->chip_rev >= CHIP_REV_YU_LITE_A3) {
+ u32 reg = skge_read32(hw, B2_GP_IO);
+ reg |= GP_DIR_9;
+ reg &= ~GP_IO_9;
+ skge_write32(hw, B2_GP_IO, reg);
+ }
- ctrl = skge->advertising;
- skge->advertising &= ~(ADVERTISED_1000baseT_Half|ADVERTISED_1000baseT_Full);
+ skge_write32(hw, SK_REG(port, GPHY_CTRL),
+ GPC_DIS_SLEEP |
+ GPC_HWCFG_M_3 | GPC_HWCFG_M_2 | GPC_HWCFG_M_1 | GPC_HWCFG_M_0 |
+ GPC_ANEG_1 | GPC_RST_SET);
- skge_phy_reset(skge);
+ skge_write32(hw, SK_REG(port, GPHY_CTRL),
+ GPC_DIS_SLEEP |
+ GPC_HWCFG_M_3 | GPC_HWCFG_M_2 | GPC_HWCFG_M_1 | GPC_HWCFG_M_0 |
+ GPC_ANEG_1 | GPC_RST_CLR);
+
+ skge_write32(hw, SK_REG(port, GMAC_CTRL), GMC_RST_CLR);
+
+ /* Force to 10/100; skge_reset will re-enable on resume */
+ gm_phy_write(hw, port, PHY_MARV_AUNE_ADV,
+ PHY_AN_100FULL | PHY_AN_100HALF |
+ PHY_AN_10FULL | PHY_AN_10HALF| PHY_AN_CSMA);
+ /* no 1000 HD/FD */
+ gm_phy_write(hw, port, PHY_MARV_1000T_CTRL, 0);
+ gm_phy_write(hw, port, PHY_MARV_CTRL,
+ PHY_CT_RESET | PHY_CT_SPS_LSB | PHY_CT_ANE |
+ PHY_CT_RE_CFG | PHY_CT_DUP_MD);
- skge->flow_control = save_mode;
- skge->advertising = ctrl;
/* Set GMAC to no flow control and auto update for speed/duplex */
gma_write16(hw, port, GM_GP_CTRL,
struct skge_port *skge = netdev_priv(dev);
struct skge_hw *hw = skge->hw;
- if (wol->wolopts & wol_supported(hw))
+ if (wol->wolopts & ~wol_supported(hw))
return -EOPNOTSUPP;
skge->wol = wol->wolopts;
- if (!netif_running(dev))
- skge_wol_init(skge);
return 0;
}
struct skge_hw *hw = skge->hw;
int port = skge->port;
- mutex_lock(&hw->phy_mutex);
+ spin_lock_bh(&hw->phy_lock);
if (hw->chip_id == CHIP_ID_GENESIS) {
switch (mode) {
case LED_MODE_OFF:
PHY_M_LED_MO_RX(MO_LED_ON));
}
}
- mutex_unlock(&hw->phy_mutex);
+ spin_unlock_bh(&hw->phy_lock);
}
/* blink LED's for finding board */
xm_phy_write(hw, port, PHY_XMAC_CTRL, ctrl);
/* Poll PHY for status changes */
- schedule_delayed_work(&skge->link_thread, LINK_HZ);
+ mod_timer(&skge->link_timer, jiffies + LINK_HZ);
}
static void xm_check_link(struct net_device *dev)
* Since internal PHY is wired to a level triggered pin, can't
* get an interrupt when carrier is detected.
*/
-static void xm_link_timer(struct work_struct *work)
+static void xm_link_timer(unsigned long arg)
{
- struct skge_port *skge =
- container_of(work, struct skge_port, link_thread.work);
+ struct skge_port *skge = (struct skge_port *) arg;
struct net_device *dev = skge->netdev;
struct skge_hw *hw = skge->hw;
int port = skge->port;
goto nochange;
}
- mutex_lock(&hw->phy_mutex);
+ spin_lock(&hw->phy_lock);
xm_check_link(dev);
- mutex_unlock(&hw->phy_mutex);
+ spin_unlock(&hw->phy_lock);
nochange:
if (netif_running(dev))
- schedule_delayed_work(&skge->link_thread, LINK_HZ);
+ mod_timer(&skge->link_timer, jiffies + LINK_HZ);
}
static void genesis_mac_init(struct skge_hw *hw, int port)
netif_stop_queue(skge->netdev);
netif_carrier_off(skge->netdev);
- mutex_lock(&hw->phy_mutex);
+ spin_lock_bh(&hw->phy_lock);
if (hw->chip_id == CHIP_ID_GENESIS) {
genesis_reset(hw, port);
genesis_mac_init(hw, port);
yukon_reset(hw, port);
yukon_init(hw, port);
}
- mutex_unlock(&hw->phy_mutex);
+ spin_unlock_bh(&hw->phy_lock);
dev->set_multicast_list(dev);
}
/* fallthru */
case SIOCGMIIREG: {
u16 val = 0;
- mutex_lock(&hw->phy_mutex);
+ spin_lock_bh(&hw->phy_lock);
if (hw->chip_id == CHIP_ID_GENESIS)
err = __xm_phy_read(hw, skge->port, data->reg_num & 0x1f, &val);
else
err = __gm_phy_read(hw, skge->port, data->reg_num & 0x1f, &val);
- mutex_unlock(&hw->phy_mutex);
+ spin_unlock_bh(&hw->phy_lock);
data->val_out = val;
break;
}
if (!capable(CAP_NET_ADMIN))
return -EPERM;
- mutex_lock(&hw->phy_mutex);
+ spin_lock_bh(&hw->phy_lock);
if (hw->chip_id == CHIP_ID_GENESIS)
err = xm_phy_write(hw, skge->port, data->reg_num & 0x1f,
data->val_in);
else
err = gm_phy_write(hw, skge->port, data->reg_num & 0x1f,
data->val_in);
- mutex_unlock(&hw->phy_mutex);
+ spin_unlock_bh(&hw->phy_lock);
break;
}
return err;
goto free_rx_ring;
/* Initialize MAC */
- mutex_lock(&hw->phy_mutex);
+ spin_lock_bh(&hw->phy_lock);
if (hw->chip_id == CHIP_ID_GENESIS)
genesis_mac_init(hw, port);
else
yukon_mac_init(hw, port);
- mutex_unlock(&hw->phy_mutex);
+ spin_unlock_bh(&hw->phy_lock);
/* Configure RAMbuffers */
chunk = hw->ram_size / ((hw->ports + 1)*2);
skge_write8(hw, Q_ADDR(rxqaddr[port], Q_CSR), CSR_START | CSR_IRQ_CL_F);
skge_led(skge, LED_MODE_ON);
+ spin_lock_irq(&hw->hw_lock);
+ hw->intr_mask |= portmask[port];
+ skge_write32(hw, B0_IMSK, hw->intr_mask);
+ spin_unlock_irq(&hw->hw_lock);
+
netif_poll_enable(dev);
return 0;
printk(KERN_INFO PFX "%s: disabling interface\n", dev->name);
netif_stop_queue(dev);
+
if (hw->chip_id == CHIP_ID_GENESIS && hw->phy_type == SK_PHY_XMAC)
- cancel_delayed_work(&skge->link_thread);
+ del_timer_sync(&skge->link_timer);
+
+ netif_poll_disable(dev);
+ netif_carrier_off(dev);
+
+ spin_lock_irq(&hw->hw_lock);
+ hw->intr_mask &= ~portmask[port];
+ skge_write32(hw, B0_IMSK, hw->intr_mask);
+ spin_unlock_irq(&hw->hw_lock);
skge_write8(skge->hw, SK_REG(skge->port, LNK_LED_REG), LED_OFF);
if (hw->chip_id == CHIP_ID_GENESIS)
skge_led(skge, LED_MODE_OFF);
- netif_poll_disable(dev);
+ netif_tx_lock_bh(dev);
skge_tx_clean(dev);
+ netif_tx_unlock_bh(dev);
+
skge_rx_clean(skge);
kfree(skge->rx_ring.start);
struct skge_port *skge = netdev_priv(dev);
struct skge_element *e;
- netif_tx_lock_bh(dev);
for (e = skge->tx_ring.to_clean; e != skge->tx_ring.to_use; e = e->next) {
struct skge_tx_desc *td = e->desc;
skge_tx_free(skge, e, td->control);
skge->tx_ring.to_clean = e;
netif_wake_queue(dev);
- netif_tx_unlock_bh(dev);
}
static void skge_tx_timeout(struct net_device *dev)
spin_lock_irqsave(&hw->hw_lock, flags);
__netif_rx_complete(dev);
- hw->intr_mask |= irqmask[skge->port];
+ hw->intr_mask |= napimask[skge->port];
skge_write32(hw, B0_IMSK, hw->intr_mask);
skge_read32(hw, B0_IMSK);
spin_unlock_irqrestore(&hw->hw_lock, flags);
}
/*
- * Interrupt from PHY are handled in work queue
+ * Interrupts from the PHY are handled in a tasklet (softirq)
* because accessing phy registers requires spin wait which might
* cause excess interrupt latency.
*/
-static void skge_extirq(struct work_struct *work)
+static void skge_extirq(unsigned long arg)
{
- struct skge_hw *hw = container_of(work, struct skge_hw, phy_work);
+ struct skge_hw *hw = (struct skge_hw *) arg;
int port;
- mutex_lock(&hw->phy_mutex);
for (port = 0; port < hw->ports; port++) {
struct net_device *dev = hw->dev[port];
- struct skge_port *skge = netdev_priv(dev);
if (netif_running(dev)) {
+ struct skge_port *skge = netdev_priv(dev);
+
+ spin_lock(&hw->phy_lock);
if (hw->chip_id != CHIP_ID_GENESIS)
yukon_phy_intr(skge);
else if (hw->phy_type == SK_PHY_BCOM)
bcom_phy_intr(skge);
+ spin_unlock(&hw->phy_lock);
}
}
- mutex_unlock(&hw->phy_mutex);
spin_lock_irq(&hw->hw_lock);
hw->intr_mask |= IS_EXT_REG;
status &= hw->intr_mask;
if (status & IS_EXT_REG) {
hw->intr_mask &= ~IS_EXT_REG;
- schedule_work(&hw->phy_work);
+ tasklet_schedule(&hw->phy_task);
}
if (status & (IS_XA1_F|IS_R1_F)) {
struct skge_hw *hw = skge->hw;
unsigned port = skge->port;
const struct sockaddr *addr = p;
+ u16 ctrl;
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
- mutex_lock(&hw->phy_mutex);
memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN);
- memcpy_toio(hw->regs + B2_MAC_1 + port*8,
- dev->dev_addr, ETH_ALEN);
- memcpy_toio(hw->regs + B2_MAC_2 + port*8,
- dev->dev_addr, ETH_ALEN);
- if (hw->chip_id == CHIP_ID_GENESIS)
- xm_outaddr(hw, port, XM_SA, dev->dev_addr);
- else {
- gma_set_addr(hw, port, GM_SRC_ADDR_1L, dev->dev_addr);
- gma_set_addr(hw, port, GM_SRC_ADDR_2L, dev->dev_addr);
+ if (!netif_running(dev)) {
+ memcpy_toio(hw->regs + B2_MAC_1 + port*8, dev->dev_addr, ETH_ALEN);
+ memcpy_toio(hw->regs + B2_MAC_2 + port*8, dev->dev_addr, ETH_ALEN);
+ } else {
+ /* disable Rx */
+ spin_lock_bh(&hw->phy_lock);
+ ctrl = gma_read16(hw, port, GM_GP_CTRL);
+ gma_write16(hw, port, GM_GP_CTRL, ctrl & ~GM_GPCR_RX_ENA);
+
+ memcpy_toio(hw->regs + B2_MAC_1 + port*8, dev->dev_addr, ETH_ALEN);
+ memcpy_toio(hw->regs + B2_MAC_2 + port*8, dev->dev_addr, ETH_ALEN);
+
+ if (hw->chip_id == CHIP_ID_GENESIS)
+ xm_outaddr(hw, port, XM_SA, dev->dev_addr);
+ else {
+ gma_set_addr(hw, port, GM_SRC_ADDR_1L, dev->dev_addr);
+ gma_set_addr(hw, port, GM_SRC_ADDR_2L, dev->dev_addr);
+ }
+
+ gma_write16(hw, port, GM_GP_CTRL, ctrl);
+ spin_unlock_bh(&hw->phy_lock);
}
- mutex_unlock(&hw->phy_mutex);
return 0;
}
else
hw->ram_size = t8 * 4096;
- hw->intr_mask = IS_HW_ERR | IS_PORT_1;
- if (hw->ports > 1)
- hw->intr_mask |= IS_PORT_2;
+ hw->intr_mask = IS_HW_ERR;
+ /* Use PHY IRQ for all but fiber based Genesis board */
if (!(hw->chip_id == CHIP_ID_GENESIS && hw->phy_type == SK_PHY_XMAC))
hw->intr_mask |= IS_EXT_REG;
skge_write32(hw, B0_IMSK, hw->intr_mask);
- mutex_lock(&hw->phy_mutex);
for (i = 0; i < hw->ports; i++) {
if (hw->chip_id == CHIP_ID_GENESIS)
genesis_reset(hw, i);
else
yukon_reset(hw, i);
}
- mutex_unlock(&hw->phy_mutex);
return 0;
}
skge->netdev = dev;
skge->hw = hw;
skge->msg_enable = netif_msg_init(debug, default_msg);
+
skge->tx_ring.count = DEFAULT_TX_RING_SIZE;
skge->rx_ring.count = DEFAULT_RX_RING_SIZE;
skge->port = port;
/* Only used for Genesis XMAC */
- INIT_DELAYED_WORK(&skge->link_thread, xm_link_timer);
+ setup_timer(&skge->link_timer, xm_link_timer, (unsigned long) skge);
if (hw->chip_id != CHIP_ID_GENESIS) {
dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
}
hw->pdev = pdev;
- mutex_init(&hw->phy_mutex);
- INIT_WORK(&hw->phy_work, skge_extirq);
spin_lock_init(&hw->hw_lock);
+ spin_lock_init(&hw->phy_lock);
+ tasklet_init(&hw->phy_task, &skge_extirq, (unsigned long) hw);
hw->regs = ioremap_nocache(pci_resource_start(pdev, 0), 0x4000);
if (!hw->regs) {
dev0 = hw->dev[0];
unregister_netdev(dev0);
+ tasklet_disable(&hw->phy_task);
+
spin_lock_irq(&hw->hw_lock);
hw->intr_mask = 0;
skge_write32(hw, B0_IMSK, 0);
}
#ifdef CONFIG_PM
-static int vaux_avail(struct pci_dev *pdev)
-{
- int pm_cap;
-
- pm_cap = pci_find_capability(pdev, PCI_CAP_ID_PM);
- if (pm_cap) {
- u16 ctl;
- pci_read_config_word(pdev, pm_cap + PCI_PM_PMC, &ctl);
- if (ctl & PCI_PM_CAP_AUX_POWER)
- return 1;
- }
- return 0;
-}
-
-
static int skge_suspend(struct pci_dev *pdev, pm_message_t state)
{
struct skge_hw *hw = pci_get_drvdata(pdev);
wol |= skge->wol;
}
- if (wol && vaux_avail(pdev))
- skge_write8(hw, B0_POWER_CTRL,
- PC_VAUX_ENA | PC_VCC_ENA | PC_VAUX_ON | PC_VCC_OFF);
-
skge_write32(hw, B0_IMSK, 0);
pci_enable_wake(pdev, pci_choose_state(pdev, state), wol);
pci_set_power_state(pdev, pci_choose_state(pdev, state));
}
#endif
+static void skge_shutdown(struct pci_dev *pdev)
+{
+ struct skge_hw *hw = pci_get_drvdata(pdev);
+ int i, wol = 0;
+
+ for (i = 0; i < hw->ports; i++) {
+ struct net_device *dev = hw->dev[i];
+ struct skge_port *skge = netdev_priv(dev);
+
+ if (skge->wol)
+ skge_wol_init(skge);
+ wol |= skge->wol;
+ }
+
+ pci_enable_wake(pdev, PCI_D3hot, wol);
+ pci_enable_wake(pdev, PCI_D3cold, wol);
+
+ pci_disable_device(pdev);
+ pci_set_power_state(pdev, PCI_D3hot);
+
+}
+
static struct pci_driver skge_driver = {
.name = DRV_NAME,
.id_table = skge_id_table,
.suspend = skge_suspend,
.resume = skge_resume,
#endif
+ .shutdown = skge_shutdown,
};
static int __init skge_init_module(void)
u32 ram_size;
u32 ram_offset;
u16 phy_addr;
- struct work_struct phy_work;
- struct mutex phy_mutex;
+ spinlock_t phy_lock;
+ struct tasklet_struct phy_task;
};
enum pause_control {
struct net_device_stats net_stats;
- struct delayed_work link_thread;
+ struct timer_list link_timer;
enum pause_control flow_control;
enum pause_status flow_status;
u8 rx_csum;
#include "sky2.h"
#define DRV_NAME "sky2"
-#define DRV_VERSION "1.13"
+#define DRV_VERSION "1.14"
#define PFX DRV_NAME " "
/*
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4361) }, /* 88E8050 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4362) }, /* 88E8053 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4363) }, /* 88E8055 */
+#ifdef broken
+ /* This device causes data corruption problems that are not resolved */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4364) }, /* 88E8056 */
+#endif
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4366) }, /* 88EC036 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4367) }, /* 88EC032 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4368) }, /* 88EC034 */
ledover &= ~PHY_M_LED_MO_RX;
}
- if (hw->chip_id == CHIP_ID_YUKON_EC_U && hw->chip_rev == CHIP_REV_YU_EC_A1) {
+ if (hw->chip_id == CHIP_ID_YUKON_EC_U &&
+ hw->chip_rev == CHIP_REV_YU_EC_U_A1) {
/* apply fixes in PHY AFE */
- pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR);
gm_phy_write(hw, port, PHY_MARV_EXT_ADR, 255);
/* increase differential signal amplitude in 10BASE-T */
gm_phy_write(hw, port, 0x17, 0x2002);
/* set page register to 0 */
- gm_phy_write(hw, port, PHY_MARV_EXT_ADR, pg);
+ gm_phy_write(hw, port, PHY_MARV_EXT_ADR, 0);
} else if (hw->chip_id != CHIP_ID_YUKON_EX) {
gm_phy_write(hw, port, PHY_MARV_LED_CTRL, ledctrl);
if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) {
sky2_write8(hw, SK_REG(port, RX_GMF_LP_THR), 768/8);
sky2_write8(hw, SK_REG(port, RX_GMF_UP_THR), 1024/8);
- if (hw->dev[port]->mtu > ETH_DATA_LEN) {
- /* set Tx GMAC FIFO Almost Empty Threshold */
- sky2_write32(hw, SK_REG(port, TX_GMF_AE_THR), 0x180);
- /* Disable Store & Forward mode for TX */
- sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T), TX_STFW_DIS);
- }
+
+ /* set Tx GMAC FIFO Almost Empty Threshold */
+ sky2_write32(hw, SK_REG(port, TX_GMF_AE_THR),
+ (ECU_JUMBO_WM << 16) | ECU_AE_THR);
+
+ if (hw->dev[port]->mtu > ETH_DATA_LEN)
+ sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
+ TX_JUMBO_ENA | TX_STFW_DIS);
+ else
+ sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
+ TX_JUMBO_DIS | TX_STFW_ENA);
}
}
/* Set almost empty threshold */
if (hw->chip_id == CHIP_ID_YUKON_EC_U
&& hw->chip_rev == CHIP_REV_YU_EC_U_A0)
- sky2_write16(hw, Q_ADDR(txqaddr[port], Q_AL), 0x1a0);
+ sky2_write16(hw, Q_ADDR(txqaddr[port], Q_AL), ECU_TXFF_LEV);
sky2_prefetch_init(hw, txqaddr[port], sky2->tx_le_map,
TX_RING_SIZE - 1);
/* Stop more packets from being queued */
netif_stop_queue(dev);
+ netif_carrier_off(dev);
/* Disable port IRQ */
imask = sky2_read32(hw, B0_IMSK);
sky2_write32(hw, RB_ADDR(txqaddr[port], RB_CTRL),
RB_RST_SET | RB_DIS_OP_MD);
- /* WA for dev. #4.209 */
- if (hw->chip_id == CHIP_ID_YUKON_EC_U
- && (hw->chip_rev == CHIP_REV_YU_EC_U_A1 || hw->chip_rev == CHIP_REV_YU_EC_U_B0))
- sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
- sky2->speed != SPEED_1000 ?
- TX_STFW_ENA : TX_STFW_DIS);
-
ctrl = gma_read16(hw, port, GM_GP_CTRL);
ctrl &= ~(GM_GPCR_TX_ENA | GM_GPCR_RX_ENA);
gma_write16(hw, port, GM_GP_CTRL, ctrl);
{
struct sky2_port *sky2 = netdev_priv(dev);
struct sky2_hw *hw = sky2->hw;
+ unsigned port = sky2->port;
int err;
u16 ctl, mode;
u32 imask;
if (new_mtu < ETH_ZLEN || new_mtu > ETH_JUMBO_MTU)
return -EINVAL;
- /* TSO on Yukon Ultra and MTU > 1500 not supported */
- if (hw->chip_id == CHIP_ID_YUKON_EC_U && new_mtu > ETH_DATA_LEN)
- dev->features &= ~NETIF_F_TSO;
+ if (new_mtu > ETH_DATA_LEN && hw->chip_id == CHIP_ID_YUKON_FE)
+ return -EINVAL;
if (!netif_running(dev)) {
dev->mtu = new_mtu;
synchronize_irq(hw->pdev->irq);
- ctl = gma_read16(hw, sky2->port, GM_GP_CTRL);
- gma_write16(hw, sky2->port, GM_GP_CTRL, ctl & ~GM_GPCR_RX_ENA);
+ if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) {
+ if (new_mtu > ETH_DATA_LEN) {
+ sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
+ TX_JUMBO_ENA | TX_STFW_DIS);
+ dev->features &= NETIF_F_TSO | NETIF_F_SG | NETIF_F_IP_CSUM;
+ } else
+ sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
+ TX_JUMBO_DIS | TX_STFW_ENA);
+ }
+
+ ctl = gma_read16(hw, port, GM_GP_CTRL);
+ gma_write16(hw, port, GM_GP_CTRL, ctl & ~GM_GPCR_RX_ENA);
sky2_rx_stop(sky2);
sky2_rx_clean(sky2);
if (dev->mtu > ETH_DATA_LEN)
mode |= GM_SMOD_JUMBO_ENA;
- gma_write16(hw, sky2->port, GM_SERIAL_MODE, mode);
+ gma_write16(hw, port, GM_SERIAL_MODE, mode);
- sky2_write8(hw, RB_ADDR(rxqaddr[sky2->port], RB_CTRL), RB_ENA_OP_MD);
+ sky2_write8(hw, RB_ADDR(rxqaddr[port], RB_CTRL), RB_ENA_OP_MD);
err = sky2_rx_start(sky2);
sky2_write32(hw, B0_IMSK, imask);
if (err)
dev_close(dev);
else {
- gma_write16(hw, sky2->port, GM_GP_CTRL, ctl);
+ gma_write16(hw, port, GM_GP_CTRL, ctl);
netif_poll_enable(hw->dev[0]);
netif_wake_queue(dev);
}
}
-/* This should never happen it is a fatal situation */
-static void sky2_descriptor_error(struct sky2_hw *hw, unsigned port,
- const char *rxtx, u32 mask)
+/* This should never happen; it is a bug. */
+static void sky2_le_error(struct sky2_hw *hw, unsigned port,
+ u16 q, unsigned ring_size)
{
struct net_device *dev = hw->dev[port];
struct sky2_port *sky2 = netdev_priv(dev);
- u32 imask;
-
- printk(KERN_ERR PFX "%s: %s descriptor error (hardware problem)\n",
- dev ? dev->name : "<not registered>", rxtx);
+ unsigned idx;
+ const u64 *le = (q == Q_R1 || q == Q_R2)
+ ? (u64 *) sky2->rx_le : (u64 *) sky2->tx_le;
- imask = sky2_read32(hw, B0_IMSK);
- imask &= ~mask;
- sky2_write32(hw, B0_IMSK, imask);
+ idx = sky2_read16(hw, Y2_QADDR(q, PREF_UNIT_GET_IDX));
+ printk(KERN_ERR PFX "%s: descriptor error q=%#x get=%u [%llx] put=%u\n",
+ dev->name, (unsigned) q, idx, (unsigned long long) le[idx],
+ (unsigned) sky2_read16(hw, Y2_QADDR(q, PREF_UNIT_PUT_IDX)));
- if (dev) {
- spin_lock(&sky2->phy_lock);
- sky2_link_down(sky2);
- spin_unlock(&sky2->phy_lock);
- }
+ sky2_write32(hw, Q_ADDR(q, Q_CSR), BMU_CLR_IRQ_CHK);
}
/* If idle then force a fake soft NAPI poll once a second
mod_timer(&hw->idle_timer, jiffies + msecs_to_jiffies(idle_timeout));
}
-
-static int sky2_poll(struct net_device *dev0, int *budget)
+/* Hardware/software error handling */
+static void sky2_err_intr(struct sky2_hw *hw, u32 status)
{
- struct sky2_hw *hw = ((struct sky2_port *) netdev_priv(dev0))->hw;
- int work_limit = min(dev0->quota, *budget);
- int work_done = 0;
- u32 status = sky2_read32(hw, B0_Y2_SP_EISR);
+ if (net_ratelimit())
+ dev_warn(&hw->pdev->dev, "error interrupt status=%#x\n", status);
if (status & Y2_IS_HW_ERR)
sky2_hw_intr(hw);
- if (status & Y2_IS_IRQ_PHY1)
- sky2_phy_intr(hw, 0);
-
- if (status & Y2_IS_IRQ_PHY2)
- sky2_phy_intr(hw, 1);
-
if (status & Y2_IS_IRQ_MAC1)
sky2_mac_intr(hw, 0);
sky2_mac_intr(hw, 1);
if (status & Y2_IS_CHK_RX1)
- sky2_descriptor_error(hw, 0, "receive", Y2_IS_CHK_RX1);
+ sky2_le_error(hw, 0, Q_R1, RX_LE_SIZE);
if (status & Y2_IS_CHK_RX2)
- sky2_descriptor_error(hw, 1, "receive", Y2_IS_CHK_RX2);
+ sky2_le_error(hw, 1, Q_R2, RX_LE_SIZE);
if (status & Y2_IS_CHK_TXA1)
- sky2_descriptor_error(hw, 0, "transmit", Y2_IS_CHK_TXA1);
+ sky2_le_error(hw, 0, Q_XA1, TX_RING_SIZE);
if (status & Y2_IS_CHK_TXA2)
- sky2_descriptor_error(hw, 1, "transmit", Y2_IS_CHK_TXA2);
+ sky2_le_error(hw, 1, Q_XA2, TX_RING_SIZE);
+}
+
+static int sky2_poll(struct net_device *dev0, int *budget)
+{
+ struct sky2_hw *hw = ((struct sky2_port *) netdev_priv(dev0))->hw;
+ int work_limit = min(dev0->quota, *budget);
+ int work_done = 0;
+ u32 status = sky2_read32(hw, B0_Y2_SP_EISR);
+
+ if (unlikely(status & Y2_IS_ERROR))
+ sky2_err_intr(hw, status);
+
+ if (status & Y2_IS_IRQ_PHY1)
+ sky2_phy_intr(hw, 0);
+
+ if (status & Y2_IS_IRQ_PHY2)
+ sky2_phy_intr(hw, 1);
work_done = sky2_status_intr(hw, work_limit);
if (work_done < work_limit) {
int i;
/* disable ASF */
- if (hw->chip_id <= CHIP_ID_YUKON_EC) {
- if (hw->chip_id == CHIP_ID_YUKON_EX) {
- status = sky2_read16(hw, HCU_CCSR);
- status &= ~(HCU_CCSR_AHB_RST | HCU_CCSR_CPU_RST_MODE |
- HCU_CCSR_UC_STATE_MSK);
- sky2_write16(hw, HCU_CCSR, status);
- } else
- sky2_write8(hw, B28_Y2_ASF_STAT_CMD, Y2_ASF_RESET);
- sky2_write16(hw, B0_CTST, Y2_ASF_DISABLE);
- }
+ if (hw->chip_id == CHIP_ID_YUKON_EX) {
+ status = sky2_read16(hw, HCU_CCSR);
+ status &= ~(HCU_CCSR_AHB_RST | HCU_CCSR_CPU_RST_MODE |
+ HCU_CCSR_UC_STATE_MSK);
+ sky2_write16(hw, HCU_CCSR, status);
+ } else
+ sky2_write8(hw, B28_Y2_ASF_STAT_CMD, Y2_ASF_RESET);
+ sky2_write16(hw, B0_CTST, Y2_ASF_DISABLE);
/* do a SW reset */
sky2_write8(hw, B0_CTST, CS_RST_SET);
regs->len - B3_RI_WTO_R1);
}
+/* In order to do Jumbo packets on these chips, we need to turn off the
+ * transmit store/forward. Therefore checksum offload won't work.
+ */
+static int no_tx_offload(struct net_device *dev)
+{
+ const struct sky2_port *sky2 = netdev_priv(dev);
+ const struct sky2_hw *hw = sky2->hw;
+
+ return dev->mtu > ETH_DATA_LEN &&
+ (hw->chip_id == CHIP_ID_YUKON_EX
+ || hw->chip_id == CHIP_ID_YUKON_EC_U);
+}
+
+static int sky2_set_tx_csum(struct net_device *dev, u32 data)
+{
+ if (data && no_tx_offload(dev))
+ return -EINVAL;
+
+ return ethtool_op_set_tx_csum(dev, data);
+}
+
+
+static int sky2_set_tso(struct net_device *dev, u32 data)
+{
+ if (data && no_tx_offload(dev))
+ return -EINVAL;
+
+ return ethtool_op_set_tso(dev, data);
+}
+
static const struct ethtool_ops sky2_ethtool_ops = {
.get_settings = sky2_get_settings,
.set_settings = sky2_set_settings,
.get_sg = ethtool_op_get_sg,
.set_sg = ethtool_op_set_sg,
.get_tx_csum = ethtool_op_get_tx_csum,
- .set_tx_csum = ethtool_op_set_tx_csum,
+ .set_tx_csum = sky2_set_tx_csum,
.get_tso = ethtool_op_get_tso,
- .set_tso = ethtool_op_set_tso,
+ .set_tso = sky2_set_tso,
.get_rx_csum = sky2_get_rx_csum,
.set_rx_csum = sky2_set_rx_csum,
.get_strings = sky2_get_strings,
goto out;
pci_enable_wake(pdev, PCI_D0, 0);
+
+ /* Re-enable all clocks */
+ if (hw->chip_id == CHIP_ID_YUKON_EX || hw->chip_id == CHIP_ID_YUKON_EC_U)
+ sky2_pci_write32(hw, PCI_DEV_REG3, 0);
+
sky2_reset(hw);
sky2_write32(hw, B0_IMSK, Y2_IS_BASE);
| Y2_IS_CHK_TXA1 | Y2_IS_CHK_RX1,
Y2_IS_PORT_2 = Y2_IS_IRQ_PHY2 | Y2_IS_IRQ_MAC2
| Y2_IS_CHK_TXA2 | Y2_IS_CHK_RX2,
+ Y2_IS_ERROR = Y2_IS_HW_ERR |
+ Y2_IS_IRQ_MAC1 | Y2_IS_CHK_TXA1 | Y2_IS_CHK_RX1 |
+ Y2_IS_IRQ_MAC2 | Y2_IS_CHK_TXA2 | Y2_IS_CHK_RX2,
};
/* B2_IRQM_HWE_MSK 32 bit IRQ Moderation HW Error Mask */
TX_GMF_RP = 0x0d70,/* 32 bit Tx GMAC FIFO Read Pointer */
TX_GMF_RSTP = 0x0d74,/* 32 bit Tx GMAC FIFO Restart Pointer */
TX_GMF_RLEV = 0x0d78,/* 32 bit Tx GMAC FIFO Read Level */
+
+ /* Threshold values for Yukon-EC Ultra and Extreme */
+ ECU_AE_THR = 0x0070, /* Almost Empty Threshold */
+ ECU_TXFF_LEV = 0x01a0, /* Tx BMU FIFO Level */
+ ECU_JUMBO_WM = 0x0080, /* Jumbo Mode Watermark */
};
/* Descriptor Poll Timer Registers */
TX_VLAN_TAG_ON = 1<<25,/* enable VLAN tagging */
TX_VLAN_TAG_OFF = 1<<24,/* disable VLAN tagging */
+ TX_JUMBO_ENA = 1<<23,/* PCI Jumbo Mode enable (Yukon-EC Ultra) */
+ TX_JUMBO_DIS = 1<<22,/* PCI Jumbo Mode disable (Yukon-EC Ultra) */
+
GMF_WSP_TST_ON = 1<<18,/* Write Shadow Pointer Test On */
GMF_WSP_TST_OFF = 1<<17,/* Write Shadow Pointer Test Off */
GMF_WSP_STEP = 1<<16,/* Write Shadow Pointer Step/Increment */
SPIDER_NET_DESCR_CARDOWNED | SPIDER_NET_DMAC_NOCS;
spin_unlock_irqrestore(&chain->lock, flags);
- if (skb->protocol == htons(ETH_P_IP))
+ if (skb->protocol == htons(ETH_P_IP) && skb->ip_summed == CHECKSUM_PARTIAL)
switch (skb->nh.iph->protocol) {
case IPPROTO_TCP:
hwdescr->dmac_cmd_status |= SPIDER_NET_DMAC_TCP;
/* XXX - leak? */
MEM = dvma_malloc_align(sizeof(struct lance_memory), 0x10000);
+ if (MEM == NULL) {
+#ifdef CONFIG_SUN3
+ iounmap((void __iomem *)ioaddr);
+#endif
+ printk(KERN_WARNING "SUN3 Lance couldn't allocate DVMA memory\n");
+ return 0;
+ }
lp->iobase = (volatile unsigned short *)ioaddr;
dev->base_addr = (unsigned long)ioaddr; /* informational only */
REGA(CSR0) = CSR0_STOP;
- request_irq(LANCE_IRQ, lance_interrupt, IRQF_DISABLED, "SUN3 Lance", dev);
+ if (request_irq(LANCE_IRQ, lance_interrupt, IRQF_DISABLED, "SUN3 Lance", dev) < 0) {
+#ifdef CONFIG_SUN3
+ iounmap((void __iomem *)ioaddr);
+#endif
+ dvma_free((void *)MEM);
+ printk(KERN_WARNING "SUN3 Lance unable to allocate IRQ\n");
+ return 0;
+ }
dev->irq = (unsigned short)LANCE_IRQ;
return &gp->net_stats;
}
+static int gem_set_mac_address(struct net_device *dev, void *addr)
+{
+ struct sockaddr *macaddr = (struct sockaddr *) addr;
+ struct gem *gp = dev->priv;
+ unsigned char *e = &dev->dev_addr[0];
+
+ if (!is_valid_ether_addr(macaddr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ if (!netif_running(dev) || !netif_device_present(dev)) {
+ /* We'll just catch it later when the
+ * device is up'd or resumed.
+ */
+ memcpy(dev->dev_addr, macaddr->sa_data, dev->addr_len);
+ return 0;
+ }
+
+ mutex_lock(&gp->pm_mutex);
+ memcpy(dev->dev_addr, macaddr->sa_data, dev->addr_len);
+ if (gp->running) {
+ writel((e[4] << 8) | e[5], gp->regs + MAC_ADDR0);
+ writel((e[2] << 8) | e[3], gp->regs + MAC_ADDR1);
+ writel((e[0] << 8) | e[1], gp->regs + MAC_ADDR2);
+ }
+ mutex_unlock(&gp->pm_mutex);
+
+ return 0;
+}
+
static void gem_set_multicast(struct net_device *dev)
{
struct gem *gp = dev->priv;
dev->change_mtu = gem_change_mtu;
dev->irq = pdev->irq;
dev->dma = 0;
+ dev->set_mac_address = gem_set_mac_address;
#ifdef CONFIG_NET_POLL_CONTROLLER
dev->poll_controller = gem_poll_controller;
#endif
struct happy_meal *hp = dev_get_drvdata(&dev->dev);
struct net_device *net_dev = hp->dev;
- unregister_netdevice(net_dev);
+ unregister_netdev(net_dev);
/* XXX qfe parent interrupt... */
struct lance_private *lp = dev_get_drvdata(&sun4_sdev.ofdev.dev);
struct net_device *net_dev = lp->dev;
- unregister_netdevice(net_dev);
+ unregister_netdev(net_dev);
lance_free_hwresources(lp);
struct lance_private *lp = dev_get_drvdata(&dev->dev);
struct net_device *net_dev = lp->dev;
- unregister_netdevice(net_dev);
+ unregister_netdev(net_dev);
lance_free_hwresources(lp);
if (!dev)
return -ENOMEM;
+ memcpy(dev->dev_addr, idprom->id_ethaddr, 6);
+
qe = netdev_priv(dev);
i = of_getintprop_default(sdev->ofdev.node, "channel#", -1);
struct sunqe *qp = dev_get_drvdata(&dev->dev);
struct net_device *net_dev = qp->dev;
- unregister_netdevice(net_dev);
+ unregister_netdev(net_dev);
sbus_iounmap(qp->qcregs, CREG_REG_SIZE);
sbus_iounmap(qp->mregs, MREGS_REG_SIZE);
#define DRV_MODULE_NAME "tg3"
#define PFX DRV_MODULE_NAME ": "
-#define DRV_MODULE_VERSION "3.74"
-#define DRV_MODULE_RELDATE "February 20, 2007"
+#define DRV_MODULE_VERSION "3.75"
+#define DRV_MODULE_RELDATE "March 23, 2007"
#define TG3_DEF_MAC_MODE 0
#define TG3_DEF_RX_MODE 0
* Reading the PCI State register will confirm whether the
* interrupt is ours and will flush the status block.
*/
- if ((sblk->status & SD_STATUS_UPDATED) ||
- !(tr32(TG3PCI_PCISTATE) & PCISTATE_INT_NOT_ACTIVE)) {
- /*
- * Writing any value to intr-mbox-0 clears PCI INTA# and
- * chip-internal interrupt pending events.
- * Writing non-zero to intr-mbox-0 additional tells the
- * NIC to stop sending us irqs, engaging "in-intr-handler"
- * event coalescing.
- */
- tw32_mailbox(MAILBOX_INTERRUPT_0 + TG3_64BIT_REG_LOW,
- 0x00000001);
- if (tg3_irq_sync(tp))
+ if (unlikely(!(sblk->status & SD_STATUS_UPDATED))) {
+ if ((tp->tg3_flags & TG3_FLAG_CHIP_RESETTING) ||
+ (tr32(TG3PCI_PCISTATE) & PCISTATE_INT_NOT_ACTIVE)) {
+ handled = 0;
goto out;
- sblk->status &= ~SD_STATUS_UPDATED;
- if (likely(tg3_has_work(tp))) {
- prefetch(&tp->rx_rcb[tp->rx_rcb_ptr]);
- netif_rx_schedule(dev); /* schedule NAPI poll */
- } else {
- /* No work, shared interrupt perhaps? re-enable
- * interrupts, and flush that PCI write
- */
- tw32_mailbox_f(MAILBOX_INTERRUPT_0 + TG3_64BIT_REG_LOW,
- 0x00000000);
}
- } else { /* shared interrupt */
- handled = 0;
+ }
+
+ /*
+ * Writing any value to intr-mbox-0 clears PCI INTA# and
+ * chip-internal interrupt pending events.
+ * Writing non-zero to intr-mbox-0 additional tells the
+ * NIC to stop sending us irqs, engaging "in-intr-handler"
+ * event coalescing.
+ */
+ tw32_mailbox(MAILBOX_INTERRUPT_0 + TG3_64BIT_REG_LOW, 0x00000001);
+ if (tg3_irq_sync(tp))
+ goto out;
+ sblk->status &= ~SD_STATUS_UPDATED;
+ if (likely(tg3_has_work(tp))) {
+ prefetch(&tp->rx_rcb[tp->rx_rcb_ptr]);
+ netif_rx_schedule(dev); /* schedule NAPI poll */
+ } else {
+ /* No work, shared interrupt perhaps? re-enable
+ * interrupts, and flush that PCI write
+ */
+ tw32_mailbox_f(MAILBOX_INTERRUPT_0 + TG3_64BIT_REG_LOW,
+ 0x00000000);
}
out:
return IRQ_RETVAL(handled);
* Reading the PCI State register will confirm whether the
* interrupt is ours and will flush the status block.
*/
- if ((sblk->status_tag != tp->last_tag) ||
- !(tr32(TG3PCI_PCISTATE) & PCISTATE_INT_NOT_ACTIVE)) {
- /*
- * writing any value to intr-mbox-0 clears PCI INTA# and
- * chip-internal interrupt pending events.
- * writing non-zero to intr-mbox-0 additional tells the
- * NIC to stop sending us irqs, engaging "in-intr-handler"
- * event coalescing.
- */
- tw32_mailbox(MAILBOX_INTERRUPT_0 + TG3_64BIT_REG_LOW,
- 0x00000001);
- if (tg3_irq_sync(tp))
+ if (unlikely(sblk->status_tag == tp->last_tag)) {
+ if ((tp->tg3_flags & TG3_FLAG_CHIP_RESETTING) ||
+ (tr32(TG3PCI_PCISTATE) & PCISTATE_INT_NOT_ACTIVE)) {
+ handled = 0;
goto out;
- if (netif_rx_schedule_prep(dev)) {
- prefetch(&tp->rx_rcb[tp->rx_rcb_ptr]);
- /* Update last_tag to mark that this status has been
- * seen. Because interrupt may be shared, we may be
- * racing with tg3_poll(), so only update last_tag
- * if tg3_poll() is not scheduled.
- */
- tp->last_tag = sblk->status_tag;
- __netif_rx_schedule(dev);
}
- } else { /* shared interrupt */
- handled = 0;
+ }
+
+ /*
+ * writing any value to intr-mbox-0 clears PCI INTA# and
+ * chip-internal interrupt pending events.
+ * writing non-zero to intr-mbox-0 additional tells the
+ * NIC to stop sending us irqs, engaging "in-intr-handler"
+ * event coalescing.
+ */
+ tw32_mailbox(MAILBOX_INTERRUPT_0 + TG3_64BIT_REG_LOW, 0x00000001);
+ if (tg3_irq_sync(tp))
+ goto out;
+ if (netif_rx_schedule_prep(dev)) {
+ prefetch(&tp->rx_rcb[tp->rx_rcb_ptr]);
+ /* Update last_tag to mark that this status has been
+ * seen. Because interrupt may be shared, we may be
+ * racing with tg3_poll(), so only update last_tag
+ * if tg3_poll() is not scheduled.
+ */
+ tp->last_tag = sblk->status_tag;
+ __netif_rx_schedule(dev);
}
out:
return IRQ_RETVAL(handled);
if (write_op == tg3_write_flush_reg32)
tp->write32 = tg3_write32;
+ /* Prevent the irq handler from reading or writing PCI registers
+ * during chip reset when the memory enable bit in the PCI command
+ * register may be cleared. The chip does not generate interrupts
+ * at this time, but the irq handler may still be called due to irq
+ * sharing or irqpoll.
+ */
+ tp->tg3_flags |= TG3_FLAG_CHIP_RESETTING;
+ if (tp->hw_status) {
+ tp->hw_status->status = 0;
+ tp->hw_status->status_tag = 0;
+ }
+ tp->last_tag = 0;
+ smp_mb();
+ synchronize_irq(tp->pdev->irq);
+
/* do the reset */
val = GRC_MISC_CFG_CORECLK_RESET;
pci_restore_state(tp->pdev);
+ tp->tg3_flags &= ~TG3_FLAG_CHIP_RESETTING;
+
/* Make sure PCI-X relaxed ordering bit is clear. */
pci_read_config_dword(tp->pdev, TG3PCI_X_CAPS, &val);
val &= ~PCIX_CAPS_RELAXED_ORDERING;
RDMAC_MODE_ADDROFLOW_ENAB | RDMAC_MODE_FIFOOFLOW_ENAB |
RDMAC_MODE_FIFOURUN_ENAB | RDMAC_MODE_FIFOOREAD_ENAB |
RDMAC_MODE_LNGREAD_ENAB);
- if (tp->tg3_flags & TG3_FLAG_SPLIT_MODE)
- rdmac_mode |= RDMAC_MODE_SPLIT_ENABLE;
/* If statement applies to 5705 and 5750 PCI devices only */
if ((GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5705 &&
} else if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5704) {
val &= ~(PCIX_CAPS_SPLIT_MASK | PCIX_CAPS_BURST_MASK);
val |= (PCIX_CAPS_MAX_BURST_CPIOB << PCIX_CAPS_BURST_SHIFT);
- if (tp->tg3_flags & TG3_FLAG_SPLIT_MODE)
- val |= (tp->split_mode_max_reqs <<
- PCIX_CAPS_SPLIT_SHIFT);
}
tw32(TG3PCI_X_CAPS, val);
}
grc_misc_cfg = tr32(GRC_MISC_CFG);
grc_misc_cfg &= GRC_MISC_CFG_BOARD_ID_MASK;
- /* Broadcom's driver says that CIOBE multisplit has a bug */
-#if 0
- if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5704 &&
- grc_misc_cfg == GRC_MISC_CFG_BOARD_ID_5704CIOBE) {
- tp->tg3_flags |= TG3_FLAG_SPLIT_MODE;
- tp->split_mode_max_reqs = SPLIT_MODE_5704_MAX_REQ;
- }
-#endif
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5705 &&
(grc_misc_cfg == GRC_MISC_CFG_BOARD_ID_5788 ||
grc_misc_cfg == GRC_MISC_CFG_BOARD_ID_5788M))
i == 5 ? '\n' : ':');
printk(KERN_INFO "%s: RXcsums[%d] LinkChgREG[%d] "
- "MIirq[%d] ASF[%d] Split[%d] WireSpeed[%d] "
- "TSOcap[%d] \n",
+ "MIirq[%d] ASF[%d] WireSpeed[%d] TSOcap[%d]\n",
dev->name,
(tp->tg3_flags & TG3_FLAG_RX_CHECKSUMS) != 0,
(tp->tg3_flags & TG3_FLAG_USE_LINKCHG_REG) != 0,
(tp->tg3_flags & TG3_FLAG_USE_MI_INTERRUPT) != 0,
(tp->tg3_flags & TG3_FLAG_ENABLE_ASF) != 0,
- (tp->tg3_flags & TG3_FLAG_SPLIT_MODE) != 0,
(tp->tg3_flags2 & TG3_FLG2_NO_ETH_WIRE_SPEED) == 0,
(tp->tg3_flags2 & TG3_FLG2_TSO_CAPABLE) != 0);
printk(KERN_INFO "%s: dma_rwctrl[%08x] dma_mask[%d-bit]\n",
#define TG3_FLAG_40BIT_DMA_BUG 0x08000000
#define TG3_FLAG_BROKEN_CHECKSUMS 0x10000000
#define TG3_FLAG_GOT_SERDES_FLOWCTL 0x20000000
-#define TG3_FLAG_SPLIT_MODE 0x40000000
+#define TG3_FLAG_CHIP_RESETTING 0x40000000
#define TG3_FLAG_INIT_COMPLETE 0x80000000
u32 tg3_flags2;
#define TG3_FLG2_RESTART_TIMER 0x00000001
#define TG3_FLG2_NO_FWARE_REPORTED 0x40000000
#define TG3_FLG2_PHY_ADJUST_TRIM 0x80000000
- u32 split_mode_max_reqs;
-#define SPLIT_MODE_5704_MAX_REQ 3
-
struct timer_list timer;
u16 timer_counter;
u16 timer_multiplier;
* which references it.
****************************************************************************/
-static int __init ibmtr_probe(struct net_device *dev)
+static int __devinit ibmtr_probe(struct net_device *dev)
{
int i;
int base_addr = dev->base_addr;
return -ENODEV;
}
-int __init ibmtr_probe_card(struct net_device *dev)
+int __devinit ibmtr_probe_card(struct net_device *dev)
{
int err = ibmtr_probe(dev);
if (!err) {
/* Structure/enum declaration ------------------------------- */
struct tx_desc {
- u32 tdes0, tdes1, tdes2, tdes3; /* Data for the card */
+ __le32 tdes0, tdes1, tdes2, tdes3; /* Data for the card */
char *tx_buf_ptr; /* Data for us */
struct tx_desc *next_tx_desc;
} __attribute__(( aligned(32) ));
struct rx_desc {
- u32 rdes0, rdes1, rdes2, rdes3; /* Data for the card */
+ __le32 rdes0, rdes1, rdes2, rdes3; /* Data for the card */
struct sk_buff *rx_skb_ptr; /* Data for us */
struct rx_desc *next_rx_desc;
} __attribute__(( aligned(32) ));
/* read 64 word srom data */
for (i = 0; i < 64; i++)
- ((u16 *) db->srom)[i] =
+ ((__le16 *) db->srom)[i] =
cpu_to_le16(read_srom_word(db->ioaddr, i));
/* Set Node address */
if (bd == ugeth->confBd[txQ]) {
if (!netif_queue_stopped(dev))
netif_stop_queue(dev);
- return NETDEV_TX_BUSY;
}
ugeth->txBd[txQ] = bd;
spin_unlock_irq(&ugeth->lock);
- return NETDEV_TX_OK;
+ return 0;
}
static int ucc_geth_rx(struct ucc_geth_private *ugeth, u8 rxQ, int rx_work_limit)
+++ /dev/null
-#ifndef _LMC_MEDIA_H_
-#define _LMC_MEDIA_H_
-
-lmc_media_t lmc_ds3_media = {
- lmc_ds3_init, /* special media init stuff */
- lmc_ds3_default, /* reset to default state */
- lmc_ds3_set_status, /* reset status to state provided */
- lmc_dummy_set_1, /* set clock source */
- lmc_dummy_set2_1, /* set line speed */
- lmc_ds3_set_100ft, /* set cable length */
- lmc_ds3_set_scram, /* set scrambler */
- lmc_ds3_get_link_status, /* get link status */
- lmc_dummy_set_1, /* set link status */
- lmc_ds3_set_crc_length, /* set CRC length */
- lmc_dummy_set_1, /* set T1 or E1 circuit type */
- lmc_ds3_watchdog
-};
-
-lmc_media_t lmc_hssi_media = {
- lmc_hssi_init, /* special media init stuff */
- lmc_hssi_default, /* reset to default state */
- lmc_hssi_set_status, /* reset status to state provided */
- lmc_hssi_set_clock, /* set clock source */
- lmc_dummy_set2_1, /* set line speed */
- lmc_dummy_set_1, /* set cable length */
- lmc_dummy_set_1, /* set scrambler */
- lmc_hssi_get_link_status, /* get link status */
- lmc_hssi_set_link_status, /* set link status */
- lmc_hssi_set_crc_length, /* set CRC length */
- lmc_dummy_set_1, /* set T1 or E1 circuit type */
- lmc_hssi_watchdog
-};
-
-lmc_media_t lmc_ssi_media = { lmc_ssi_init, /* special media init stuff */
- lmc_ssi_default, /* reset to default state */
- lmc_ssi_set_status, /* reset status to state provided */
- lmc_ssi_set_clock, /* set clock source */
- lmc_ssi_set_speed, /* set line speed */
- lmc_dummy_set_1, /* set cable length */
- lmc_dummy_set_1, /* set scrambler */
- lmc_ssi_get_link_status, /* get link status */
- lmc_ssi_set_link_status, /* set link status */
- lmc_ssi_set_crc_length, /* set CRC length */
- lmc_dummy_set_1, /* set T1 or E1 circuit type */
- lmc_ssi_watchdog
-};
-
-lmc_media_t lmc_t1_media = {
- lmc_t1_init, /* special media init stuff */
- lmc_t1_default, /* reset to default state */
- lmc_t1_set_status, /* reset status to state provided */
- lmc_t1_set_clock, /* set clock source */
- lmc_dummy_set2_1, /* set line speed */
- lmc_dummy_set_1, /* set cable length */
- lmc_dummy_set_1, /* set scrambler */
- lmc_t1_get_link_status, /* get link status */
- lmc_dummy_set_1, /* set link status */
- lmc_t1_set_crc_length, /* set CRC length */
- lmc_t1_set_circuit_type, /* set T1 or E1 circuit type */
- lmc_t1_watchdog
-};
-
-
-#endif
-
if (rc) {
airo_print_err(dev->name, "register interrupt %d failed, rc %d",
irq, rc);
- goto err_out_unlink;
+ goto err_out_nets;
}
if (!is_pcmcia) {
if (!request_region( dev->base_addr, 64, dev->name )) {
release_region( dev->base_addr, 64 );
err_out_irq:
free_irq(dev->irq, dev);
+err_out_nets:
+ airo_networks_free(ai);
err_out_unlink:
del_airo_dev(dev);
err_out_thr:
u8 channel;
struct bcm43xx_phyinfo *phy;
const char *iso_country;
+ u8 max_bg_channel;
geo = kzalloc(sizeof(*geo), GFP_KERNEL);
if (!geo)
}
iso_country = bcm43xx_locale_iso(bcm->sprom.locale);
+ /* set the maximum channel based on the locale set in sprom or with the locale option */
+ switch (bcm->sprom.locale) {
+ case BCM43xx_LOCALE_THAILAND:
+ case BCM43xx_LOCALE_ISRAEL:
+ case BCM43xx_LOCALE_JORDAN:
+ case BCM43xx_LOCALE_USA_CANADA_ANZ:
+ case BCM43xx_LOCALE_USA_LOW:
+ max_bg_channel = 11;
+ break;
+ case BCM43xx_LOCALE_JAPAN:
+ case BCM43xx_LOCALE_JAPAN_HIGH:
+ max_bg_channel = 14;
+ break;
+ default:
+ max_bg_channel = 13;
+ }
+
if (have_a) {
for (i = 0, channel = IEEE80211_52GHZ_MIN_CHANNEL;
channel <= IEEE80211_52GHZ_MAX_CHANNEL; channel++) {
}
if (have_bg) {
for (i = 0, channel = IEEE80211_24GHZ_MIN_CHANNEL;
- channel <= IEEE80211_24GHZ_MAX_CHANNEL; channel++) {
+ channel <= max_bg_channel; channel++) {
chan = &geo->bg[i++];
chan->freq = bcm43xx_channel_to_freq_bg(channel);
chan->channel = channel;
if (radio->version == 0x2050)
bcm43xx_phy_write(bcm, 0x0038, 0x0667);
- if (phy->type == BCM43xx_PHYTYPE_G) {
+ if (phy->connected) {
if (radio->version == 0x2050) {
bcm43xx_radio_write16(bcm, 0x007A,
bcm43xx_radio_read16(bcm, 0x007A)
{
struct bcm43xx_phyinfo *phy = bcm43xx_current_phy(bcm);
struct bcm43xx_radioinfo *radio = bcm43xx_current_radio(bcm);
- u16 backup_phy[15];
+ u16 backup_phy[15] = {0};
u16 backup_radio[3];
u16 backup_bband;
u16 i;
backup_phy[1] = bcm43xx_phy_read(bcm, 0x0001);
backup_phy[2] = bcm43xx_phy_read(bcm, 0x0811);
backup_phy[3] = bcm43xx_phy_read(bcm, 0x0812);
- backup_phy[4] = bcm43xx_phy_read(bcm, 0x0814);
- backup_phy[5] = bcm43xx_phy_read(bcm, 0x0815);
+ if (phy->rev != 1) {
+ backup_phy[4] = bcm43xx_phy_read(bcm, 0x0814);
+ backup_phy[5] = bcm43xx_phy_read(bcm, 0x0815);
+ }
backup_phy[6] = bcm43xx_phy_read(bcm, 0x005A);
backup_phy[7] = bcm43xx_phy_read(bcm, 0x0059);
backup_phy[8] = bcm43xx_phy_read(bcm, 0x0058);
bcm43xx_phy_read(bcm, 0x0811) | 0x0001);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_phy_read(bcm, 0x0812) & 0xFFFE);
- bcm43xx_phy_write(bcm, 0x0814,
- bcm43xx_phy_read(bcm, 0x0814) | 0x0001);
- bcm43xx_phy_write(bcm, 0x0815,
- bcm43xx_phy_read(bcm, 0x0815) & 0xFFFE);
- bcm43xx_phy_write(bcm, 0x0814,
- bcm43xx_phy_read(bcm, 0x0814) | 0x0002);
- bcm43xx_phy_write(bcm, 0x0815,
- bcm43xx_phy_read(bcm, 0x0815) & 0xFFFD);
+ if (phy->rev != 1) {
+ bcm43xx_phy_write(bcm, 0x0814,
+ bcm43xx_phy_read(bcm, 0x0814) | 0x0001);
+ bcm43xx_phy_write(bcm, 0x0815,
+ bcm43xx_phy_read(bcm, 0x0815) & 0xFFFE);
+ bcm43xx_phy_write(bcm, 0x0814,
+ bcm43xx_phy_read(bcm, 0x0814) | 0x0002);
+ bcm43xx_phy_write(bcm, 0x0815,
+ bcm43xx_phy_read(bcm, 0x0815) & 0xFFFD);
+ }
bcm43xx_phy_write(bcm, 0x0811,
bcm43xx_phy_read(bcm, 0x0811) | 0x000C);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_phy_read(bcm, 0x000A)
| 0x2000);
}
- bcm43xx_phy_write(bcm, 0x0814,
- bcm43xx_phy_read(bcm, 0x0814) | 0x0004);
- bcm43xx_phy_write(bcm, 0x0815,
- bcm43xx_phy_read(bcm, 0x0815) & 0xFFFB);
+ if (phy->rev != 1) {
+ bcm43xx_phy_write(bcm, 0x0814,
+ bcm43xx_phy_read(bcm, 0x0814) | 0x0004);
+ bcm43xx_phy_write(bcm, 0x0815,
+ bcm43xx_phy_read(bcm, 0x0815) & 0xFFFB);
+ }
bcm43xx_phy_write(bcm, 0x0003,
(bcm43xx_phy_read(bcm, 0x0003)
& 0xFF9F) | 0x0040);
}
}
- bcm43xx_phy_write(bcm, 0x0814, backup_phy[4]);
- bcm43xx_phy_write(bcm, 0x0815, backup_phy[5]);
+ if (phy->rev != 1) {
+ bcm43xx_phy_write(bcm, 0x0814, backup_phy[4]);
+ bcm43xx_phy_write(bcm, 0x0815, backup_phy[5]);
+ }
bcm43xx_phy_write(bcm, 0x005A, backup_phy[6]);
bcm43xx_phy_write(bcm, 0x0059, backup_phy[7]);
bcm43xx_phy_write(bcm, 0x0058, backup_phy[8]);
bcm43xx_phy_write(bcm, 0x0811, 0x0000);
bcm43xx_phy_write(bcm, 0x0015, 0x00C0);
}
- if (phy->rev >= 3) {
+ if (phy->rev > 5) {
bcm43xx_phy_write(bcm, 0x0811, 0x0400);
bcm43xx_phy_write(bcm, 0x0015, 0x00C0);
}
- if (phy->connected) {
+ if (phy->rev >= 2 && phy->connected) {
tmp = bcm43xx_phy_read(bcm, 0x0400) & 0xFF;
- if (tmp < 6) {
+ if (tmp == 3 || tmp == 5) {
bcm43xx_phy_write(bcm, 0x04C2, 0x1816);
bcm43xx_phy_write(bcm, 0x04C3, 0x8006);
- if (tmp != 3) {
+ if (tmp == 5) {
bcm43xx_phy_write(bcm, 0x04CC,
(bcm43xx_phy_read(bcm, 0x04CC)
& 0x00FF) | 0x1F00);
}
}
- }
- if (phy->rev < 3 && phy->connected)
bcm43xx_phy_write(bcm, 0x047E, 0x0078);
+ }
if (radio->revision == 8) {
bcm43xx_phy_write(bcm, 0x0801, bcm43xx_phy_read(bcm, 0x0801) | 0x0080);
bcm43xx_phy_write(bcm, 0x043E, bcm43xx_phy_read(bcm, 0x043E) | 0x0004);
if (phy->rev >= 6) {
bcm43xx_phy_write(bcm, 0x0036,
(bcm43xx_phy_read(bcm, 0x0036)
- & 0xF000) | (radio->txctl2 << 12));
+ & 0x0FFF) | (radio->txctl2 << 12));
}
if (bcm->sprom.boardflags & BCM43xx_BFL_PACTRL)
bcm43xx_phy_write(bcm, 0x002E, 0x8075);
else
bcm43xx_phy_write(bcm, 0x002F, 0x0202);
}
- if (phy->connected) {
+ if (phy->connected || phy->rev >= 2) {
bcm43xx_phy_lo_adjust(bcm, 0);
bcm43xx_phy_write(bcm, 0x080F, 0x8078);
}
*/
bcm43xx_nrssi_hw_update(bcm, 0xFFFF);
bcm43xx_calc_nrssi_threshold(bcm);
- } else if (phy->connected) {
+ } else if (phy->connected || phy->rev >= 2) {
if (radio->nrssi[0] == -1000) {
assert(radio->nrssi[1] == -1000);
bcm43xx_calc_nrssi_slope(bcm);
bcm43xx_phy_write(bcm, 0x005A, 0x0480);
bcm43xx_phy_write(bcm, 0x0059, 0x0810);
bcm43xx_phy_write(bcm, 0x0058, 0x000D);
- if (phy->rev == 0) {
+ if (phy->analog == 0) {
bcm43xx_phy_write(bcm, 0x0003, 0x0122);
} else {
bcm43xx_phy_write(bcm, 0x000A,
nrssi0 = (s16)bcm43xx_phy_read(bcm, 0x0027);
bcm43xx_radio_write16(bcm, 0x007A,
bcm43xx_radio_read16(bcm, 0x007A) & 0x007F);
- if (phy->rev >= 2) {
+ if (phy->analog >= 2) {
bcm43xx_write16(bcm, 0x03E6, 0x0040);
- } else if (phy->rev == 0) {
+ } else if (phy->analog == 0) {
bcm43xx_write16(bcm, 0x03E6, 0x0122);
} else {
bcm43xx_write16(bcm, BCM43xx_MMIO_CHANNEL_EXT,
bcm43xx_phy_write(bcm, 0x0015, backup[5]);
bcm43xx_phy_write(bcm, 0x002A, backup[6]);
bcm43xx_synth_pu_workaround(bcm, radio->channel);
- if (phy->rev != 0)
+ if (phy->analog != 0)
bcm43xx_write16(bcm, 0x03F4, backup[13]);
bcm43xx_phy_write(bcm, 0x0020, backup[7]);
bcm43xx_radio_write16(bcm, 0x007A,
bcm43xx_radio_read16(bcm, 0x007A) & 0x007F);
- if (phy->rev >= 2) {
+ if (phy->analog >= 2) {
bcm43xx_phy_write(bcm, 0x0003,
(bcm43xx_phy_read(bcm, 0x0003)
& 0xFF9F) | 0x0040);
{
u32 *stackptr = &(_stackptr[*stackidx]);
- assert((offset & 0xF000) == 0x0000);
- assert((id & 0xF0) == 0x00);
+ assert((offset & 0xE000) == 0x0000);
+ assert((id & 0xF8) == 0x00);
*stackptr = offset;
- *stackptr |= ((u32)id) << 12;
+ *stackptr |= ((u32)id) << 13;
*stackptr |= ((u32)value) << 16;
(*stackidx)++;
assert(*stackidx < BCM43xx_INTERFSTACK_SIZE);
{
size_t i;
- assert((offset & 0xF000) == 0x0000);
- assert((id & 0xF0) == 0x00);
+ assert((offset & 0xE000) == 0x0000);
+ assert((id & 0xF8) == 0x00);
for (i = 0; i < BCM43xx_INTERFSTACK_SIZE; i++, stackptr++) {
- if ((*stackptr & 0x00000FFF) != offset)
+ if ((*stackptr & 0x00001FFF) != offset)
continue;
- if (((*stackptr & 0x0000F000) >> 12) != id)
+ if (((*stackptr & 0x00007000) >> 13) != id)
continue;
return ((*stackptr & 0xFFFF0000) >> 16);
}
for (i = 0; i < 5; i++) {
for (j = 0; j < 5; j++) {
- if (tmp == (data_high[i] << 4 | data_low[j])) {
+ if (tmp == (data_high[i] | data_low[j])) {
bcm43xx_phy_write(bcm, 0x0069, (i - j) << 8 | 0x00C0);
return;
}
chip->patch_cr157 = (value >> 13) & 0x1;
chip->patch_6m_band_edge = (value >> 21) & 0x1;
chip->new_phy_layout = (value >> 31) & 0x1;
+ chip->al2230s_bit = (value >> 7) & 0x1;
chip->link_led = ((value >> 4) & 1) ? LED1 : LED2;
chip->supports_tx_led = 1;
if (value & (1 << 24)) { /* LED scenario */
return r;
}
-/* CR157 can be optionally patched by the EEPROM */
+/* CR157 can be optionally patched by the EEPROM for original ZD1211 */
static int patch_cr157(struct zd_chip *chip)
{
int r;
- u32 value;
+ u16 value;
if (!chip->patch_cr157)
return 0;
- r = zd_ioread32_locked(chip, &value, E2P_PHY_REG);
+ r = zd_ioread16_locked(chip, &value, E2P_PHY_REG);
if (r)
return r;
goto out;
r = zd_iowrite16a_locked(chip, ioreqs, ARRAY_SIZE(ioreqs));
- if (r)
- goto unlock;
-
- r = patch_cr157(chip);
-unlock:
t = zd_chip_unlock_phy_regs(chip);
if (t && !r)
r = t;
* also only 11 channels. */
#define E2P_ALLOWED_CHANNEL E2P_DATA(0x18)
-#define E2P_PHY_REG E2P_DATA(0x1a)
#define E2P_DEVICE_VER E2P_DATA(0x20)
+#define E2P_PHY_REG E2P_DATA(0x25)
#define E2P_36M_CAL_VALUE1 E2P_DATA(0x28)
#define E2P_36M_CAL_VALUE2 E2P_DATA(0x2a)
#define E2P_36M_CAL_VALUE3 E2P_DATA(0x2c)
u16 link_led;
unsigned int pa_type:4,
patch_cck_gain:1, patch_cr157:1, patch_6m_band_edge:1,
- new_phy_layout:1,
+ new_phy_layout:1, al2230s_bit:1,
is_zd1211b:1, supports_tx_led:1;
};
{
struct zd_chip *chip = zd_rf_to_chip(rf);
+ if (chip->al2230s_bit) {
+ dev_err(zd_chip_dev(chip), "AL2230S devices are not yet "
+ "supported by this driver.\n");
+ return -ENODEV;
+ }
+
rf->switch_radio_off = al2230_switch_radio_off;
if (chip->is_zd1211b) {
rf->init_hw = zd1211b_al2230_init_hw;
{ USB_DEVICE(0x0471, 0x1236), .driver_info = DEVICE_ZD1211B },
{ USB_DEVICE(0x13b1, 0x0024), .driver_info = DEVICE_ZD1211B },
{ USB_DEVICE(0x0586, 0x340f), .driver_info = DEVICE_ZD1211B },
+ { USB_DEVICE(0x0baf, 0x0121), .driver_info = DEVICE_ZD1211B },
/* "Driverless" devices that need ejecting */
{ USB_DEVICE(0x0ace, 0x2011), .driver_info = DEVICE_INSTALLER },
{}
int alloc_event_buffer(void)
{
int err = -ENOMEM;
+ unsigned long flags;
- spin_lock(&oprofilefs_lock);
+ spin_lock_irqsave(&oprofilefs_lock, flags);
buffer_size = fs_buffer_size;
buffer_watershed = fs_buffer_watershed;
- spin_unlock(&oprofilefs_lock);
+ spin_unlock_irqrestore(&oprofilefs_lock, flags);
if (buffer_watershed >= buffer_size)
return -EINVAL;
int oprofilefs_ulong_from_user(unsigned long * val, char const __user * buf, size_t count)
{
char tmpbuf[TMPBUFSIZE];
+ unsigned long flags;
if (!count)
return 0;
if (copy_from_user(tmpbuf, buf, count))
return -EFAULT;
- spin_lock(&oprofilefs_lock);
+ spin_lock_irqsave(&oprofilefs_lock, flags);
*val = simple_strtoul(tmpbuf, NULL, 0);
- spin_unlock(&oprofilefs_lock);
+ spin_unlock_irqrestore(&oprofilefs_lock, flags);
return 0;
}
if (!(value_tcr & P_TCR_BUSY))
bits |= PARPORT_STATUS_BUSY;
- dprintk((KERN_DEBUG "tcr 0x%x ir 0x%x\n", regs->p_tcr, regs->p_ir));
+ dprintk((KERN_DEBUG "tcr 0x%x ir 0x%x\n", value_tcr, value_ir));
dprintk((KERN_DEBUG "read status 0x%x\n", bits));
return bits;
}
if (value_or & P_OR_SLCT_IN)
bits |= PARPORT_CONTROL_SELECT;
- dprintk((KERN_DEBUG "tcr 0x%x or 0x%x\n", regs->p_tcr, regs->p_or));
+ dprintk((KERN_DEBUG "tcr 0x%x or 0x%x\n", value_tcr, value_or));
dprintk((KERN_DEBUG "read control 0x%x\n", bits));
return bits;
}
unsigned char value_tcr = sbus_readb(&regs->p_tcr);
unsigned char value_or = sbus_readb(&regs->p_or);
- dprintk((KERN_DEBUG "frob1: tcr 0x%x or 0x%x\n", regs->p_tcr, regs->p_or));
+ dprintk((KERN_DEBUG "frob1: tcr 0x%x or 0x%x\n",
+ value_tcr, value_or));
if (mask & PARPORT_CONTROL_STROBE) {
if (val & PARPORT_CONTROL_STROBE) {
value_tcr &= ~P_TCR_DS;
sbus_writeb(value_or, &regs->p_or);
sbus_writeb(value_tcr, &regs->p_tcr);
- dprintk((KERN_DEBUG "frob2: tcr 0x%x or 0x%x\n", regs->p_tcr, regs->p_or));
+ dprintk((KERN_DEBUG "frob2: tcr 0x%x or 0x%x\n",
+ value_tcr, value_or));
return parport_sunbpp_read_control(p);
}
int offset = entry->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE +
PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET;
writel(flag, entry->mask_base + offset);
+ readl(entry->mask_base + offset);
break;
}
default:
BUG();
break;
}
+ entry->msi_attrib.masked = !!flag;
}
void read_msi_msg(unsigned int irq, struct msi_msg *msg)
default:
BUG();
}
+ entry->msg = *msg;
}
void mask_msi_irq(unsigned int irq)
}
#ifdef CONFIG_PM
-static int __pci_save_msi_state(struct pci_dev *dev)
-{
- int pos, i = 0;
- u16 control;
- struct pci_cap_saved_state *save_state;
- u32 *cap;
-
- if (!dev->msi_enabled)
- return 0;
-
- pos = pci_find_capability(dev, PCI_CAP_ID_MSI);
- if (pos <= 0)
- return 0;
-
- save_state = kzalloc(sizeof(struct pci_cap_saved_state) + sizeof(u32) * 5,
- GFP_KERNEL);
- if (!save_state) {
- printk(KERN_ERR "Out of memory in pci_save_msi_state\n");
- return -ENOMEM;
- }
- cap = &save_state->data[0];
-
- pci_read_config_dword(dev, pos, &cap[i++]);
- control = cap[0] >> 16;
- pci_read_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, &cap[i++]);
- if (control & PCI_MSI_FLAGS_64BIT) {
- pci_read_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, &cap[i++]);
- pci_read_config_dword(dev, pos + PCI_MSI_DATA_64, &cap[i++]);
- } else
- pci_read_config_dword(dev, pos + PCI_MSI_DATA_32, &cap[i++]);
- if (control & PCI_MSI_FLAGS_MASKBIT)
- pci_read_config_dword(dev, pos + PCI_MSI_MASK_BIT, &cap[i++]);
- save_state->cap_nr = PCI_CAP_ID_MSI;
- pci_add_saved_cap(dev, save_state);
- return 0;
-}
-
static void __pci_restore_msi_state(struct pci_dev *dev)
{
- int i = 0, pos;
+ int pos;
u16 control;
- struct pci_cap_saved_state *save_state;
- u32 *cap;
+ struct msi_desc *entry;
if (!dev->msi_enabled)
return;
- save_state = pci_find_saved_cap(dev, PCI_CAP_ID_MSI);
- pos = pci_find_capability(dev, PCI_CAP_ID_MSI);
- if (!save_state || pos <= 0)
- return;
- cap = &save_state->data[0];
+ entry = get_irq_msi(dev->irq);
+ pos = entry->msi_attrib.pos;
pci_intx(dev, 0); /* disable intx */
- control = cap[i++] >> 16;
msi_set_enable(dev, 0);
- pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, cap[i++]);
- if (control & PCI_MSI_FLAGS_64BIT) {
- pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, cap[i++]);
- pci_write_config_dword(dev, pos + PCI_MSI_DATA_64, cap[i++]);
- } else
- pci_write_config_dword(dev, pos + PCI_MSI_DATA_32, cap[i++]);
- if (control & PCI_MSI_FLAGS_MASKBIT)
- pci_write_config_dword(dev, pos + PCI_MSI_MASK_BIT, cap[i++]);
+ write_msi_msg(dev->irq, &entry->msg);
+ if (entry->msi_attrib.maskbit)
+ msi_set_mask_bit(dev->irq, entry->msi_attrib.masked);
+
+ pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &control);
+ control &= ~(PCI_MSI_FLAGS_QSIZE | PCI_MSI_FLAGS_ENABLE);
+ if (entry->msi_attrib.maskbit || !entry->msi_attrib.masked)
+ control |= PCI_MSI_FLAGS_ENABLE;
pci_write_config_word(dev, pos + PCI_MSI_FLAGS, control);
- pci_remove_saved_cap(save_state);
- kfree(save_state);
-}
-
-static int __pci_save_msix_state(struct pci_dev *dev)
-{
- int pos;
- int irq, head, tail = 0;
- u16 control;
- struct pci_cap_saved_state *save_state;
-
- if (!dev->msix_enabled)
- return 0;
-
- pos = pci_find_capability(dev, PCI_CAP_ID_MSIX);
- if (pos <= 0)
- return 0;
-
- /* save the capability */
- pci_read_config_word(dev, msi_control_reg(pos), &control);
- save_state = kzalloc(sizeof(struct pci_cap_saved_state) + sizeof(u16),
- GFP_KERNEL);
- if (!save_state) {
- printk(KERN_ERR "Out of memory in pci_save_msix_state\n");
- return -ENOMEM;
- }
- *((u16 *)&save_state->data[0]) = control;
-
- /* save the table */
- irq = head = dev->first_msi_irq;
- while (head != tail) {
- struct msi_desc *entry;
-
- entry = get_irq_msi(irq);
- read_msi_msg(irq, &entry->msg_save);
-
- tail = entry->link.tail;
- irq = tail;
- }
-
- save_state->cap_nr = PCI_CAP_ID_MSIX;
- pci_add_saved_cap(dev, save_state);
- return 0;
-}
-
-int pci_save_msi_state(struct pci_dev *dev)
-{
- int rc;
-
- rc = __pci_save_msi_state(dev);
- if (rc)
- return rc;
-
- rc = __pci_save_msix_state(dev);
-
- return rc;
}
static void __pci_restore_msix_state(struct pci_dev *dev)
{
- u16 save;
int pos;
int irq, head, tail = 0;
struct msi_desc *entry;
- struct pci_cap_saved_state *save_state;
+ u16 control;
if (!dev->msix_enabled)
return;
- save_state = pci_find_saved_cap(dev, PCI_CAP_ID_MSIX);
- if (!save_state)
- return;
- save = *((u16 *)&save_state->data[0]);
- pci_remove_saved_cap(save_state);
- kfree(save_state);
-
- pos = pci_find_capability(dev, PCI_CAP_ID_MSIX);
- if (pos <= 0)
- return;
-
/* route the table */
pci_intx(dev, 0); /* disable intx */
msix_set_enable(dev, 0);
irq = head = dev->first_msi_irq;
+ entry = get_irq_msi(irq);
+ pos = entry->msi_attrib.pos;
while (head != tail) {
entry = get_irq_msi(irq);
- write_msi_msg(irq, &entry->msg_save);
+ write_msi_msg(irq, &entry->msg);
+ msi_set_mask_bit(irq, entry->msi_attrib.masked);
tail = entry->link.tail;
irq = tail;
}
- pci_write_config_word(dev, msi_control_reg(pos), save);
+ pci_read_config_word(dev, pos + PCI_MSIX_FLAGS, &control);
+ control &= ~PCI_MSIX_FLAGS_MASKALL;
+ control |= PCI_MSIX_FLAGS_ENABLE;
+ pci_write_config_word(dev, pos + PCI_MSIX_FLAGS, control);
}
void pci_restore_msi_state(struct pci_dev *dev)
entry->msi_attrib.is_64 = is_64bit_address(control);
entry->msi_attrib.entry_nr = 0;
entry->msi_attrib.maskbit = is_mask_bit_support(control);
+ entry->msi_attrib.masked = 1;
entry->msi_attrib.default_irq = dev->irq; /* Save IOAPIC IRQ */
entry->msi_attrib.pos = pos;
if (is_mask_bit_support(control)) {
entry->msi_attrib.is_64 = 1;
entry->msi_attrib.entry_nr = j;
entry->msi_attrib.maskbit = 1;
+ entry->msi_attrib.masked = 1;
entry->msi_attrib.default_irq = dev->irq;
entry->msi_attrib.pos = pos;
entry->dev = dev;
if (pos <= 0)
return 0;
- save_state = kzalloc(sizeof(*save_state) + sizeof(u16) * 4, GFP_KERNEL);
+ save_state = pci_find_saved_cap(dev, PCI_CAP_ID_EXP);
+ if (!save_state)
+ save_state = kzalloc(sizeof(*save_state) + sizeof(u16) * 4, GFP_KERNEL);
if (!save_state) {
dev_err(&dev->dev, "Out of memory in pci_save_pcie_state\n");
return -ENOMEM;
pci_write_config_word(dev, pos + PCI_EXP_LNKCTL, cap[i++]);
pci_write_config_word(dev, pos + PCI_EXP_SLTCTL, cap[i++]);
pci_write_config_word(dev, pos + PCI_EXP_RTCTL, cap[i++]);
- pci_remove_saved_cap(save_state);
- kfree(save_state);
}
if (pos <= 0)
return 0;
- save_state = kzalloc(sizeof(*save_state) + sizeof(u16), GFP_KERNEL);
+ save_state = pci_find_saved_cap(dev, PCI_CAP_ID_PCIX);
+ if (!save_state)
+ save_state = kzalloc(sizeof(*save_state) + sizeof(u16), GFP_KERNEL);
if (!save_state) {
dev_err(&dev->dev, "Out of memory in pci_save_pcie_state\n");
return -ENOMEM;
cap = (u16 *)&save_state->data[0];
pci_write_config_word(dev, pos + PCI_X_CMD, cap[i++]);
- pci_remove_saved_cap(save_state);
- kfree(save_state);
}
/* XXX: 100% dword access ok here? */
for (i = 0; i < 16; i++)
pci_read_config_dword(dev, i * 4,&dev->saved_config_space[i]);
- if ((i = pci_save_msi_state(dev)) != 0)
- return i;
if ((i = pci_save_pcie_state(dev)) != 0)
return i;
if ((i = pci_save_pcix_state(dev)) != 0)
#endif
#if defined(CONFIG_PCI_MSI) && defined(CONFIG_PM)
-int pci_save_msi_state(struct pci_dev *dev);
void pci_restore_msi_state(struct pci_dev *dev);
#else
-static inline int pci_save_msi_state(struct pci_dev *dev) { return 0; }
static inline void pci_restore_msi_state(struct pci_dev *dev) {}
#endif
if (!dev->irq && dev->pin) {
printk(KERN_WARNING
"%s->Dev[%04x:%04x] has invalid IRQ. Check vendor BIOS\n",
- __FUNCTION__, dev->device, dev->vendor);
+ __FUNCTION__, dev->vendor, dev->device);
}
if (pcie_port_device_register(dev)) {
pci_disable_device(dev);
dev->irq = irq;
}
-static void change_legacy_io_resource(struct pci_dev * dev, unsigned index,
- unsigned start, unsigned end)
-{
- unsigned base = start & PCI_BASE_ADDRESS_IO_MASK;
- unsigned len = (end | ~PCI_BASE_ADDRESS_IO_MASK) - base + 1;
-
- /*
- * Some X versions get confused when the BARs reported through
- * /sys or /proc differ from those seen in config space, thus
- * try to update the config space values, too.
- */
- if (!(pci_resource_flags(dev, index) & IORESOURCE_IO))
- printk(KERN_WARNING "%s: cannot adjust BAR%u (not I/O)\n",
- pci_name(dev), index);
- else if (pci_resource_len(dev, index) != len)
- printk(KERN_WARNING "%s: cannot adjust BAR%u (size %04X)\n",
- pci_name(dev), index, (unsigned)pci_resource_len(dev, index));
- else {
- printk(KERN_INFO "%s: trying to change BAR%u from %04X to %04X\n",
- pci_name(dev), index,
- (unsigned)pci_resource_start(dev, index), base);
- pci_write_config_dword(dev, PCI_BASE_ADDRESS_0 + index * 4, base);
- }
- pci_resource_start(dev, index) = start;
- pci_resource_end(dev, index) = end;
- pci_resource_flags(dev, index) =
- IORESOURCE_IO | IORESOURCE_PCI_FIXED | PCI_BASE_ADDRESS_SPACE_IO;
-}
+#define LEGACY_IO_RESOURCE (IORESOURCE_IO | IORESOURCE_PCI_FIXED)
/**
* pci_setup_device - fill in class and map information of a device
u8 progif;
pci_read_config_byte(dev, PCI_CLASS_PROG, &progif);
if ((progif & 1) == 0) {
- change_legacy_io_resource(dev, 0, 0x1F0, 0x1F7);
- change_legacy_io_resource(dev, 1, 0x3F6, 0x3F6);
+ dev->resource[0].start = 0x1F0;
+ dev->resource[0].end = 0x1F7;
+ dev->resource[0].flags = LEGACY_IO_RESOURCE;
+ dev->resource[1].start = 0x3F6;
+ dev->resource[1].end = 0x3F6;
+ dev->resource[1].flags = LEGACY_IO_RESOURCE;
}
if ((progif & 4) == 0) {
- change_legacy_io_resource(dev, 2, 0x170, 0x177);
- change_legacy_io_resource(dev, 3, 0x376, 0x376);
+ dev->resource[2].start = 0x170;
+ dev->resource[2].end = 0x177;
+ dev->resource[2].flags = LEGACY_IO_RESOURCE;
+ dev->resource[3].start = 0x376;
+ dev->resource[3].end = 0x376;
+ dev->resource[3].flags = LEGACY_IO_RESOURCE;
}
}
break;
* bridge. Unfortunately, this device has no subvendor/subdevice ID. So it
* becomes necessary to do this tweak in two steps -- I've chosen the Host
* bridge as trigger.
+ *
+ * Note that we used to unhide the SMBus that way on Toshiba laptops
+ * (Satellite A40 and Tecra M2) but then found that the thermal management
+ * was done by SMM code, which could cause unsynchronized concurrent
+ * accesses to the SMBus registers, with potentially bad effects. Thus you
+ * should be very careful when adding new entries: if SMM is accessing the
+ * Intel SMBus, this is a very good reason to leave it hidden.
*/
static int asus_hides_smbus;
case 0x099c: /* HP Compaq nx6110 */
asus_hides_smbus = 1;
}
- } else if (unlikely(dev->subsystem_vendor == PCI_VENDOR_ID_TOSHIBA)) {
- if (dev->device == PCI_DEVICE_ID_INTEL_82855GM_HB)
- switch(dev->subsystem_device) {
- case 0x0001: /* Toshiba Satellite A40 */
- asus_hides_smbus = 1;
- }
- else if (dev->device == PCI_DEVICE_ID_INTEL_82855PM_HB)
- switch(dev->subsystem_device) {
- case 0x0001: /* Toshiba Tecra M2 */
- asus_hides_smbus = 1;
- }
} else if (unlikely(dev->subsystem_vendor == PCI_VENDOR_ID_SAMSUNG)) {
if (dev->device == PCI_DEVICE_ID_INTEL_82855PM_HB)
switch(dev->subsystem_device) {
skt->socket.resource_ops = &pccard_static_ops;
skt->socket.ops = &au1x00_pcmcia_operations;
skt->socket.owner = ops->owner;
- skt->socket.dev.dev = dev;
+ skt->socket.dev.parent = dev;
init_timer(&skt->poll_timer);
skt->poll_timer.function = au1x00_pcmcia_poll_event;
* "what chipselect is used". Boards could want more.
*/
-static int __devinit omap_cf_probe(struct device *dev)
+static int __init omap_cf_probe(struct platform_device *pdev)
{
unsigned seg;
struct omap_cf_socket *cf;
- struct platform_device *pdev = to_platform_device(dev);
int irq;
int status;
- seg = (int) dev->platform_data;
+ seg = (int) pdev->dev.platform_data;
if (seg == 0 || seg > 3)
return -ENODEV;
cf->timer.data = (unsigned long) cf;
cf->pdev = pdev;
- dev_set_drvdata(dev, cf);
+ platform_set_drvdata(pdev, cf);
/* this primarily just shuts up irq handling noise */
status = request_irq(irq, omap_cf_irq, IRQF_SHARED,
omap_cf_present() ? "present" : "(not present)");
cf->socket.owner = THIS_MODULE;
- cf->socket.dev.parent = dev;
+ cf->socket.dev.parent = &pdev->dev;
cf->socket.ops = &omap_cf_ops;
cf->socket.resource_ops = &pccard_static_ops;
cf->socket.features = SS_CAP_PCCARD | SS_CAP_STATIC_MAP
return status;
}
-static int __devexit omap_cf_remove(struct device *dev)
+static int __exit omap_cf_remove(struct platform_device *pdev)
{
- struct omap_cf_socket *cf = dev_get_drvdata(dev);
+ struct omap_cf_socket *cf = platform_get_drvdata(pdev);
cf->active = 0;
pcmcia_unregister_socket(&cf->socket);
return 0;
}
-static struct device_driver omap_cf_driver = {
- .name = (char *) driver_name,
- .bus = &platform_bus_type,
- .probe = omap_cf_probe,
- .remove = __devexit_p(omap_cf_remove),
- .suspend = pcmcia_socket_dev_suspend,
- .resume = pcmcia_socket_dev_resume,
+static int omap_cf_suspend(struct platform_device *pdev, pm_message_t mesg)
+{
+ return pcmcia_socket_dev_suspend(&pdev->dev, mesg);
+}
+
+static int omap_cf_resume(struct platform_device *pdev)
+{
+ return pcmcia_socket_dev_resume(&pdev->dev);
+}
+
+static struct platform_driver omap_cf_driver = {
+ .driver = {
+ .name = (char *) driver_name,
+ },
+ .remove = __exit_p(omap_cf_remove),
+ .suspend = omap_cf_suspend,
+ .resume = omap_cf_resume,
};
static int __init omap_cf_init(void)
{
if (cpu_is_omap16xx())
- return driver_register(&omap_cf_driver);
+ return platform_driver_probe(&omap_cf_driver, omap_cf_probe);
return -ENODEV;
}
static void __exit omap_cf_exit(void)
{
if (cpu_is_omap16xx())
- driver_unregister(&omap_cf_driver);
+ platform_driver_unregister(&omap_cf_driver);
}
module_init(omap_cf_init);
return -EINVAL;
if(!pnp_can_configure(dev)) {
- pnp_info("Device %s does not support resource configuration.", dev->dev.bus_id);
+ pnp_dbg("Device %s does not support resource configuration.", dev->dev.bus_id);
return -ENODEV;
}
int pnp_start_dev(struct pnp_dev *dev)
{
if (!pnp_can_write(dev)) {
- pnp_info("Device %s does not support activation.", dev->dev.bus_id);
+ pnp_dbg("Device %s does not support activation.", dev->dev.bus_id);
return -EINVAL;
}
int pnp_stop_dev(struct pnp_dev *dev)
{
if (!pnp_can_disable(dev)) {
- pnp_info("Device %s does not support disabling.", dev->dev.bus_id);
+ pnp_dbg("Device %s does not support disabling.", dev->dev.bus_id);
return -EINVAL;
}
if (dev->protocol->disable(dev)<0) {
{ "", 0 }
};
-static void reserve_range(char *pnpid, int start, int end, int port)
+static void reserve_range(const char *pnpid, resource_size_t start, resource_size_t end, int port)
{
struct resource *res;
char *regionid;
return;
snprintf(regionid, 16, "pnp %s", pnpid);
if (port)
- res = request_region(start,end-start+1,regionid);
+ res = request_region(start, end-start+1, regionid);
else
- res = request_mem_region(start,end-start+1,regionid);
+ res = request_mem_region(start, end-start+1, regionid);
if (res == NULL)
kfree(regionid);
else
* have double reservations.
*/
printk(KERN_INFO
- "pnp: %s: %s range 0x%x-0x%x %s reserved\n",
- pnpid, port ? "ioport" : "iomem", start, end,
+ "pnp: %s: %s range 0x%llx-0x%llx %s reserved\n",
+ pnpid, port ? "ioport" : "iomem",
+ (unsigned long long)start, (unsigned long long)end,
NULL != res ? "has been" : "could not be");
}
-static void reserve_resources_of_dev(struct pnp_dev *dev)
+static void reserve_resources_of_dev(const struct pnp_dev *dev)
{
int i;
#include <linux/reboot.h>
#include <linux/kernel.h>
#include <linux/ioctl.h>
+
+#include <asm/firmware.h>
#include <asm/lv1call.h>
#include <asm/ps3av.h>
#include <asm/ps3.h>
static int ps3av_module_init(void)
{
- int error = ps3_vuart_port_driver_register(&ps3av_driver);
+ int error;
+
+ if (!firmware_has_feature(FW_FEATURE_PS3_LV1))
+ return -ENODEV;
+
+ error = ps3_vuart_port_driver_register(&ps3av_driver);
if (error) {
printk(KERN_ERR
"%s: ps3_vuart_port_driver_register failed %d\n",
static const u32 ps3av_ns_table[][5] = {
/* D1, D2, D3, D4, D5 */
- [PS3AV_CMD_AUDIO_FS_44K-BASE] { 6272, 6272, 17836, 17836, 8918 },
- [PS3AV_CMD_AUDIO_FS_48K-BASE] { 6144, 6144, 11648, 11648, 5824 },
- [PS3AV_CMD_AUDIO_FS_88K-BASE] { 12544, 12544, 35672, 35672, 17836 },
- [PS3AV_CMD_AUDIO_FS_96K-BASE] { 12288, 12288, 23296, 23296, 11648 },
- [PS3AV_CMD_AUDIO_FS_176K-BASE] { 25088, 25088, 71344, 71344, 35672 },
- [PS3AV_CMD_AUDIO_FS_192K-BASE] { 24576, 24576, 46592, 46592, 23296 }
+ [PS3AV_CMD_AUDIO_FS_44K-BASE] = { 6272, 6272, 17836, 17836, 8918 },
+ [PS3AV_CMD_AUDIO_FS_48K-BASE] = { 6144, 6144, 11648, 11648, 5824 },
+ [PS3AV_CMD_AUDIO_FS_88K-BASE] = { 12544, 12544, 35672, 35672, 17836 },
+ [PS3AV_CMD_AUDIO_FS_96K-BASE] = { 12288, 12288, 23296, 23296, 11648 },
+ [PS3AV_CMD_AUDIO_FS_176K-BASE] = { 25088, 25088, 71344, 71344, 35672 },
+ [PS3AV_CMD_AUDIO_FS_192K-BASE] = { 24576, 24576, 46592, 46592, 23296 }
};
static void ps3av_cnv_ns(u8 *ns, u32 fs, u32 video_vid)
#undef BASE
-static u8 ps3av_cnv_enable(u32 source, u8 *enable)
+static u8 ps3av_cnv_enable(u32 source, const u8 *enable)
{
- u8 *p, ret = 0;
+ const u8 *p;
+ u8 ret = 0;
if (source == PS3AV_CMD_AUDIO_SOURCE_SPDIF) {
ret = 0x03;
return ret;
}
-static u8 ps3av_cnv_fifomap(u8 *map)
+static u8 ps3av_cnv_fifomap(const u8 *map)
{
- u8 *p, ret = 0;
+ const u8 *p;
+ u8 ret = 0;
p = map;
ret = p[0] + (p[1] << 2) + (p[2] << 4) + (p[3] << 6);
info->pb5.lsv = mode->audio_downmix_level;
}
-static void ps3av_cnv_chstat(u8 *chstat, u8 *cs_info)
+static void ps3av_cnv_chstat(u8 *chstat, const u8 *cs_info)
{
memcpy(chstat, cs_info, 5);
}
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/reboot.h>
+
+#include <asm/firmware.h>
#include <asm/ps3.h>
+
#include "vuart.h"
MODULE_AUTHOR("Sony Corporation");
static int __init ps3_sys_manager_init(void)
{
+ if (!firmware_has_feature(FW_FEATURE_PS3_LV1))
+ return -ENODEV;
+
return ps3_vuart_port_driver_register(&ps3_sys_manager);
}
kfree(dev->priv);
dev->priv = NULL;
fail_alloc:
- vuart_bus_priv.devices[port_number] = 0;
+ vuart_bus_priv.devices[port_number] = NULL;
fail_match:
up(&vuart_bus_priv.probe_mutex);
dev_dbg(&dev->core, "%s:%d failed\n", __func__, __LINE__);
dev_dbg(&dev->core, "%s:%d: %s no remove method\n", __func__,
__LINE__, dev->core.bus_id);
- vuart_bus_priv.devices[dev->priv->port_number] = 0;
+ vuart_bus_priv.devices[dev->priv->port_number] = NULL;
if (--vuart_bus_priv.use_count == 0) {
BUG();
pr_debug("%s:%d:\n", __func__, __LINE__);
if (!firmware_has_feature(FW_FEATURE_PS3_LV1))
- return 0;
+ return -ENODEV;
init_MUTEX(&vuart_bus_priv.probe_mutex);
result = bus_register(&ps3_vuart_bus);
static const char driver_name[] = "rtc_cmos";
+/* The RTC_INTR register may have e.g. RTC_PF set even if RTC_PIE is clear;
+ * always mask it against the irq enable bits in RTC_CONTROL. Bit values
+ * are the same: PF==PIE, AF==AIE, UF==UIE; so RTC_IRQMASK works with both.
+ */
+#define RTC_IRQMASK (RTC_PF | RTC_AF | RTC_UF)
+
+static inline int is_intr(u8 rtc_intr)
+{
+ if (!(rtc_intr & RTC_IRQF))
+ return 0;
+ return rtc_intr & RTC_IRQMASK;
+}
+
/*----------------------------------------------------------------*/
static int cmos_read_time(struct device *dev, struct rtc_time *t)
rtc_control &= ~RTC_AIE;
CMOS_WRITE(rtc_control, RTC_CONTROL);
rtc_intr = CMOS_READ(RTC_INTR_FLAGS);
- if (rtc_intr)
+ rtc_intr &= (rtc_control & RTC_IRQMASK) | RTC_IRQF;
+ if (is_intr(rtc_intr))
rtc_update_irq(&cmos->rtc->class_dev, 1, rtc_intr);
/* update alarm */
rtc_control |= RTC_AIE;
CMOS_WRITE(rtc_control, RTC_CONTROL);
rtc_intr = CMOS_READ(RTC_INTR_FLAGS);
- if (rtc_intr)
+ rtc_intr &= (rtc_control & RTC_IRQMASK) | RTC_IRQF;
+ if (is_intr(rtc_intr))
rtc_update_irq(&cmos->rtc->class_dev, 1, rtc_intr);
}
}
CMOS_WRITE(rtc_control, RTC_CONTROL);
rtc_intr = CMOS_READ(RTC_INTR_FLAGS);
- if (rtc_intr)
+ rtc_intr &= (rtc_control & RTC_IRQMASK) | RTC_IRQF;
+ if (is_intr(rtc_intr))
rtc_update_irq(&cmos->rtc->class_dev, 1, rtc_intr);
spin_unlock_irqrestore(&rtc_lock, flags);
return 0;
spin_lock(&rtc_lock);
irqstat = CMOS_READ(RTC_INTR_FLAGS);
+ irqstat &= (CMOS_READ(RTC_CONTROL) & RTC_IRQMASK) | RTC_IRQF;
spin_unlock(&rtc_lock);
- if (irqstat) {
- /* NOTE: irqstat may have e.g. RTC_PF set
- * even when RTC_PIE is clear...
- */
+ if (is_intr(irqstat)) {
rtc_update_irq(p, 1, irqstat);
return IRQ_HANDLED;
} else
{
struct cmos_rtc *cmos = dev_get_drvdata(dev);
int do_wake = device_may_wakeup(dev);
- unsigned char tmp, irqstat;
+ unsigned char tmp;
/* only the alarm might be a wakeup event source */
spin_lock_irq(&rtc_lock);
cmos->suspend_ctrl = tmp = CMOS_READ(RTC_CONTROL);
if (tmp & (RTC_PIE|RTC_AIE|RTC_UIE)) {
+ unsigned char irqstat;
+
if (do_wake)
tmp &= ~(RTC_PIE|RTC_UIE);
else
tmp &= ~(RTC_PIE|RTC_AIE|RTC_UIE);
CMOS_WRITE(tmp, RTC_CONTROL);
irqstat = CMOS_READ(RTC_INTR_FLAGS);
- } else
- irqstat = 0;
+ irqstat &= (tmp & RTC_IRQMASK) | RTC_IRQF;
+ if (is_intr(irqstat))
+ rtc_update_irq(&cmos->rtc->class_dev, 1, irqstat);
+ }
spin_unlock_irq(&rtc_lock);
- if (irqstat)
- rtc_update_irq(&cmos->rtc->class_dev, 1, irqstat);
-
/* ACPI HOOK: enable ACPI_EVENT_RTC when (tmp & RTC_AIE)
* ... it'd be best if we could do that under rtc_lock.
*/
spin_lock_irq(&rtc_lock);
CMOS_WRITE(tmp, RTC_CONTROL);
tmp = CMOS_READ(RTC_INTR_FLAGS);
- spin_unlock_irq(&rtc_lock);
- if (tmp)
+ tmp &= (cmos->suspend_ctrl & RTC_IRQMASK) | RTC_IRQF;
+ if (is_intr(tmp))
rtc_update_irq(&cmos->rtc->class_dev, 1, tmp);
+ spin_unlock_irq(&rtc_lock);
}
pr_debug("%s: resume, ctrl %02x\n",
/*----------------------------------------------------------------*/
/* The "CMOS" RTC normally lives on the platform_bus. On ACPI systems,
- * the device node may alternatively be created as a PNP device.
+ * the device node will always be created as a PNPACPI device.
*/
#ifdef CONFIG_PNPACPI
/*----------------------------------------------------------------*/
/* Platform setup should have set up an RTC device, when PNPACPI is
- * unavailable ... this is the normal case, common even on PCs.
+ * unavailable ... this could happen even on (older) PCs.
*/
static int __init cmos_platform_probe(struct platform_device *pdev)
* resulting condition code and DIAG return code. */
static inline int dia250(void *iob, int cmd)
{
- register unsigned long reg0 asm ("0") = (unsigned long) iob;
+ register unsigned long reg2 asm ("2") = (unsigned long) iob;
typedef union {
struct dasd_diag_init_io init_io;
struct dasd_diag_rw_io rw_io;
rc = 3;
asm volatile(
- " diag 0,%2,0x250\n"
+ " diag 2,%2,0x250\n"
"0: ipm %0\n"
" srl %0,28\n"
- " or %0,1\n"
+ " or %0,3\n"
"1:\n"
EX_TABLE(0b,1b)
: "+d" (rc), "=m" (*(addr_type *) iob)
- : "d" (cmd), "d" (reg0), "m" (*(addr_type *) iob)
- : "1", "cc");
+ : "d" (cmd), "d" (reg2), "m" (*(addr_type *) iob)
+ : "3", "cc");
return rc;
}
* Provide an 'ungroup' attribute so the user can remove group devices no
* longer needed or accidentally created. Saves memory :)
*/
+static void ccwgroup_ungroup_callback(struct device *dev)
+{
+ struct ccwgroup_device *gdev = to_ccwgroupdev(dev);
+
+ __ccwgroup_remove_symlinks(gdev);
+ device_unregister(dev);
+}
+
static ssize_t
ccwgroup_ungroup_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
{
struct ccwgroup_device *gdev;
+ int rc;
gdev = to_ccwgroupdev(dev);
if (gdev->state != CCWGROUP_OFFLINE)
return -EINVAL;
- __ccwgroup_remove_symlinks(gdev);
- device_unregister(dev);
-
+ /* Note that we cannot unregister the device from one of its
+ * attribute methods, so we have to use this roundabout approach.
+ */
+ rc = device_schedule_callback(dev, ccwgroup_ungroup_callback);
+ if (rc)
+ count = rc;
return count;
}
cdev_irb = &cdev->private->irb;
+ /*
+ * If the clear function had been performed, all formerly pending
+ * status at the subchannel has been cleared and we must not pass
+ * intermediate accumulated status to the device driver.
+ */
+ if (irb->scsw.fctl & SCSW_FCTL_CLEAR_FUNC)
+ memset(&cdev->private->irb, 0, sizeof(struct irb));
+
/* Copy bits which are valid only for the start function. */
if (irb->scsw.fctl & SCSW_FCTL_START_FUNC) {
/* Copy key. */
cdev_irb->scsw.cpa = irb->scsw.cpa;
/* Accumulate device status, but not the device busy flag. */
cdev_irb->scsw.dstat &= ~DEV_STAT_BUSY;
- cdev_irb->scsw.dstat |= irb->scsw.dstat;
+ /* dstat is not always valid. */
+ if (irb->scsw.stctl &
+ (SCSW_STCTL_PRIM_STATUS | SCSW_STCTL_SEC_STATUS
+ | SCSW_STCTL_INTER_STATUS | SCSW_STCTL_ALERT_STATUS))
+ cdev_irb->scsw.dstat |= irb->scsw.dstat;
/* Accumulate subchannel status. */
cdev_irb->scsw.cstat |= irb->scsw.cstat;
/* Copy residual count if it is valid. */
goto again;
}
if (rc < 0) {
- QDIO_DBF_TEXT3(1,trace,"sqberr");
- sprintf(dbf_text,"%2x,%2x,%d,%d",tmp_cnt,*cnt,ccq,q_no);
- QDIO_DBF_TEXT3(1,trace,dbf_text);
+ QDIO_DBF_TEXT3(1,trace,"sqberr");
+ sprintf(dbf_text,"%2x,%2x",tmp_cnt,*cnt);
+ QDIO_DBF_TEXT3(1,trace,dbf_text);
+ sprintf(dbf_text,"%d,%d",ccq,q_no);
+ QDIO_DBF_TEXT3(1,trace,dbf_text);
q->handler(q->cdev,QDIO_STATUS_ACTIVATE_CHECK_CONDITION|
QDIO_STATUS_LOOK_FOR_ERROR,
0, 0, 0, -1, -1, q->int_parm);
if (!no_used) {
QDIO_DBF_TEXT4(0,trace,"inqisdnA");
QDIO_DBF_HEX4(0,trace,&q,sizeof(void*));
- QDIO_DBF_TEXT4(0,trace,dbf_text);
return 1;
}
if (irq->is_qebsm) {
unsigned int count, struct qdio_buffer *buffers)
{
struct qdio_irq *irq = (struct qdio_irq *) q->irq_ptr;
+ int tmp = 0;
+
qidx &= (QDIO_MAX_BUFFERS_PER_Q - 1);
if (irq->is_qebsm) {
- while (count)
- set_slsb(q, &qidx, SLSB_CU_INPUT_EMPTY, &count);
+ while (count) {
+ tmp = set_slsb(q, &qidx, SLSB_CU_INPUT_EMPTY, &count);
+ if (!tmp)
+ return;
+ }
return;
}
for (;;) {
unsigned int count, struct qdio_buffer *buffers)
{
struct qdio_irq *irq = (struct qdio_irq *) q->irq_ptr;
+ int tmp = 0;
qidx &= (QDIO_MAX_BUFFERS_PER_Q - 1);
if (irq->is_qebsm) {
- while (count)
- set_slsb(q, &qidx, SLSB_CU_OUTPUT_PRIMED, &count);
+ while (count) {
+ tmp = set_slsb(q, &qidx, SLSB_CU_OUTPUT_PRIMED, &count);
+ if (!tmp)
+ return;
+ }
return;
}
MODULE_PARM_DESC(poll_thread, "Turn on/off poll thread, default is 1 (on).");
static struct device *ap_root_device = NULL;
+static DEFINE_SPINLOCK(ap_device_lock);
+static LIST_HEAD(ap_device_list);
/**
* Workqueue & timer for bus rescan.
int rc;
ap_dev->drv = ap_drv;
+ spin_lock_bh(&ap_device_lock);
+ list_add(&ap_dev->list, &ap_device_list);
+ spin_unlock_bh(&ap_device_lock);
rc = ap_drv->probe ? ap_drv->probe(ap_dev) : -ENODEV;
return rc;
}
ap_flush_queue(ap_dev);
if (ap_drv->remove)
ap_drv->remove(ap_dev);
+ spin_lock_bh(&ap_device_lock);
+ list_del_init(&ap_dev->list);
+ spin_unlock_bh(&ap_device_lock);
+ spin_lock_bh(&ap_dev->lock);
+ atomic_sub(ap_dev->queue_count, &ap_poll_requests);
+ spin_unlock_bh(&ap_dev->lock);
return 0;
}
(void *)(unsigned long)qid,
__ap_scan_bus);
rc = ap_query_queue(qid, &queue_depth, &device_type);
- if (dev && rc) {
- put_device(dev);
- device_unregister(dev);
- continue;
+ if (dev) {
+ ap_dev = to_ap_dev(dev);
+ spin_lock_bh(&ap_dev->lock);
+ if (rc || ap_dev->unregistered) {
+ spin_unlock_bh(&ap_dev->lock);
+ put_device(dev);
+ device_unregister(dev);
+ continue;
+ } else
+ spin_unlock_bh(&ap_dev->lock);
}
if (dev) {
put_device(dev);
spin_lock_init(&ap_dev->lock);
INIT_LIST_HEAD(&ap_dev->pendingq);
INIT_LIST_HEAD(&ap_dev->requestq);
+ INIT_LIST_HEAD(&ap_dev->list);
if (device_type == 0)
ap_probe_device_type(ap_dev);
else
case AP_RESPONSE_NO_PENDING_REPLY:
if (status.queue_empty) {
/* The card shouldn't forget requests but who knows. */
+ atomic_sub(ap_dev->queue_count, &ap_poll_requests);
ap_dev->queue_count = 0;
list_splice_init(&ap_dev->pendingq, &ap_dev->requestq);
ap_dev->requestq_count += ap_dev->pendingq_count;
ap_dev->unregistered = 1;
} else {
ap_dev->drv->receive(ap_dev, ap_msg, ERR_PTR(-ENODEV));
- rc = 0;
+ rc = -ENODEV;
}
spin_unlock_bh(&ap_dev->lock);
if (rc == -ENODEV)
* polling until bit 2^0 of the control flags is not set. If bit 2^1
* of the control flags has been set arm the poll timer.
*/
-static int __ap_poll_all(struct device *dev, void *data)
+static int __ap_poll_all(struct ap_device *ap_dev, unsigned long *flags)
{
- struct ap_device *ap_dev = to_ap_dev(dev);
- int rc;
-
spin_lock(&ap_dev->lock);
if (!ap_dev->unregistered) {
- rc = ap_poll_queue(to_ap_dev(dev), (unsigned long *) data);
- if (rc)
+ if (ap_poll_queue(ap_dev, flags))
ap_dev->unregistered = 1;
- } else
- rc = 0;
+ }
spin_unlock(&ap_dev->lock);
- if (rc)
- device_unregister(&ap_dev->device);
return 0;
}
static void ap_poll_all(unsigned long dummy)
{
unsigned long flags;
+ struct ap_device *ap_dev;
do {
flags = 0;
- bus_for_each_dev(&ap_bus_type, NULL, &flags, __ap_poll_all);
+ spin_lock(&ap_device_lock);
+ list_for_each_entry(ap_dev, &ap_device_list, list) {
+ __ap_poll_all(ap_dev, &flags);
+ }
+ spin_unlock(&ap_device_lock);
} while (flags & 1);
if (flags & 2)
ap_schedule_poll_timer();
DECLARE_WAITQUEUE(wait, current);
unsigned long flags;
int requests;
+ struct ap_device *ap_dev;
set_user_nice(current, 19);
while (1) {
set_current_state(TASK_RUNNING);
remove_wait_queue(&ap_poll_wait, &wait);
- local_bh_disable();
flags = 0;
- bus_for_each_dev(&ap_bus_type, NULL, &flags, __ap_poll_all);
- local_bh_enable();
+ spin_lock_bh(&ap_device_lock);
+ list_for_each_entry(ap_dev, &ap_device_list, list) {
+ __ap_poll_all(ap_dev, &flags);
+ }
+ spin_unlock_bh(&ap_device_lock);
}
set_current_state(TASK_RUNNING);
remove_wait_queue(&ap_poll_wait, &wait);
struct device device;
struct ap_driver *drv; /* Pointer to AP device driver. */
spinlock_t lock; /* Per device lock. */
+ struct list_head list; /* private list of all AP devices. */
ap_qid_t qid; /* AP queue id. */
int queue_depth; /* AP queue depth.*/
get_device(&zdev->ap_dev->device);
zdev->request_count++;
__zcrypt_decrease_preference(zdev);
- spin_unlock_bh(&zcrypt_device_lock);
if (try_module_get(zdev->ap_dev->drv->driver.owner)) {
+ spin_unlock_bh(&zcrypt_device_lock);
rc = zdev->ops->rsa_modexpo(zdev, mex);
+ spin_lock_bh(&zcrypt_device_lock);
module_put(zdev->ap_dev->drv->driver.owner);
}
else
rc = -EAGAIN;
- spin_lock_bh(&zcrypt_device_lock);
zdev->request_count--;
__zcrypt_increase_preference(zdev);
put_device(&zdev->ap_dev->device);
get_device(&zdev->ap_dev->device);
zdev->request_count++;
__zcrypt_decrease_preference(zdev);
- spin_unlock_bh(&zcrypt_device_lock);
if (try_module_get(zdev->ap_dev->drv->driver.owner)) {
+ spin_unlock_bh(&zcrypt_device_lock);
rc = zdev->ops->rsa_modexpo_crt(zdev, crt);
+ spin_lock_bh(&zcrypt_device_lock);
module_put(zdev->ap_dev->drv->driver.owner);
}
else
rc = -EAGAIN;
- spin_lock_bh(&zcrypt_device_lock);
zdev->request_count--;
__zcrypt_increase_preference(zdev);
put_device(&zdev->ap_dev->device);
get_device(&zdev->ap_dev->device);
zdev->request_count++;
__zcrypt_decrease_preference(zdev);
- spin_unlock_bh(&zcrypt_device_lock);
if (try_module_get(zdev->ap_dev->drv->driver.owner)) {
+ spin_unlock_bh(&zcrypt_device_lock);
rc = zdev->ops->send_cprb(zdev, xcRB);
+ spin_lock_bh(&zcrypt_device_lock);
module_put(zdev->ap_dev->drv->driver.owner);
}
else
rc = -EAGAIN;
- spin_lock_bh(&zcrypt_device_lock);
zdev->request_count--;
__zcrypt_increase_preference(zdev);
put_device(&zdev->ap_dev->device);
}
static inline struct sk_buff *
-qeth_pskb_unshare(struct sk_buff *skb, int pri)
+qeth_pskb_unshare(struct sk_buff *skb, gfp_t pri)
{
struct sk_buff *nskb;
if (!skb_cloned(skb))
#ifdef CONFIG_PCI
struct pci_dev *pdev;
struct pcidev_cookie *pcp;
- pdev = pci_find_slot (((int *) op->oprom_array)[0],
+ pdev = pci_get_bus_and_slot (((int *) op->oprom_array)[0],
((int *) op->oprom_array)[1]);
pcp = pdev->sysdata;
op->oprom_size = sizeof(int);
err = copyout(argp, op, bufsize + sizeof(int));
}
+ pci_dev_put(pdev);
#endif
}
if (copy_from_user(&inout, argp, sizeof(inout)))
return -EFAULT;
- buffer = kmalloc(inout.len, GFP_KERNEL);
+ buffer = kzalloc(inout.len, GFP_KERNEL);
if (buffer == NULL)
return -ENOMEM;
- memset(buffer,0,inout.len);
vfc_lock_device(dev);
inout.ret=
vfc_i2c_recvbuf(dev,inout.addr & 0xff
/* This function will handle the request sense scsi command */
static int tw_scsiop_request_sense(TW_Device_Extension *tw_dev, int request_id)
{
+ char request_buffer[18];
+
dprintk(KERN_NOTICE "3w-xxxx: tw_scsiop_request_sense()\n");
- /* For now we just zero the request buffer */
- memset(tw_dev->srb[request_id]->request_buffer, 0, tw_dev->srb[request_id]->request_bufflen);
+ memset(request_buffer, 0, sizeof(request_buffer));
+ request_buffer[0] = 0x70; /* Immediate fixed format */
+ request_buffer[7] = 10; /* minimum size per SPC: 18 bytes */
+ /* leave all other fields zero, giving effectively NO_SENSE return */
+ tw_transfer_internal(tw_dev, request_id, request_buffer,
+ sizeof(request_buffer));
+
tw_dev->state[request_id] = TW_S_COMPLETED;
tw_state_request_finish(tw_dev, request_id);
cmdp->u.raw64.direction =
gdth_direction_tab[scp->cmnd[0]]==DOU ? GDTH_DATA_OUT:GDTH_DATA_IN;
memcpy(cmdp->u.raw64.cmd,scp->cmnd,16);
+ cmdp->u.raw64.sg_ranz = 0;
} else {
cmdp->u.raw.reserved = 0;
cmdp->u.raw.mdisc_time = 0;
cmdp->u.raw.direction =
gdth_direction_tab[scp->cmnd[0]]==DOU ? GDTH_DATA_OUT:GDTH_DATA_IN;
memcpy(cmdp->u.raw.cmd,scp->cmnd,12);
+ cmdp->u.raw.sg_ranz = 0;
}
if (scp->use_sg) {
struct lpfc_sli *psli = &phba->sli;
struct lpfc_sli_ring *pring;
- if (state == pci_channel_io_perm_failure) {
- lpfc_pci_remove_one(pdev);
+ if (state == pci_channel_io_perm_failure)
return PCI_ERS_RESULT_DISCONNECT;
- }
+
pci_disable_device(pdev);
/*
* There may be I/Os dropped by the firmware.
(struct scatterlist *)Cmnd->request_buffer,
Cmnd->use_sg,
Cmnd->sc_data_direction);
- } else {
+ } else if (Cmnd->request_bufflen) {
sbus_unmap_single(qpti->sdev,
(__u32)((unsigned long)Cmnd->SCp.ptr),
Cmnd->request_bufflen,
*/
if (copy_sense) {
if (!SCSI_SENSE_VALID(scmd)) {
- memcpy(scmd->sense_buffer, scmd->request_buffer,
+ memcpy(scmd->sense_buffer, page_address(sgl.page),
sizeof(scmd->sense_buffer));
}
__free_page(sgl.page);
}
static DEVICE_ATTR(rescan, S_IWUSR, NULL, store_rescan_field);
+static void sdev_store_delete_callback(struct device *dev)
+{
+ scsi_remove_device(to_scsi_device(dev));
+}
+
static ssize_t sdev_store_delete(struct device *dev, struct device_attribute *attr, const char *buf,
size_t count)
{
- scsi_remove_device(to_scsi_device(dev));
+ int rc;
+
+ /* An attribute cannot be unregistered by one of its own methods,
+ * so we have to use this roundabout approach.
+ */
+ rc = device_schedule_callback(dev, sdev_store_delete_callback);
+ if (rc)
+ count = rc;
return count;
};
static DEVICE_ATTR(delete, S_IWUSR, NULL, sdev_store_delete);
{
unsigned int status = serial_in(up, UART_MSR);
- if (status & UART_MSR_ANY_DELTA && up->ier & UART_IER_MSI) {
+ if (status & UART_MSR_ANY_DELTA && up->ier & UART_IER_MSI &&
+ up->port.info != NULL) {
if (status & UART_MSR_TERI)
up->port.icount.rng++;
if (status & UART_MSR_DDSR)
serial8250_handle_port(struct uart_8250_port *up)
{
unsigned int status;
+ unsigned long flags;
- spin_lock(&up->port.lock);
+ spin_lock_irqsave(&up->port.lock, flags);
status = serial_inp(up, UART_LSR);
if (status & UART_LSR_THRE)
transmit_chars(up);
- spin_unlock(&up->port.lock);
+ spin_unlock_irqrestore(&up->port.lock, flags);
}
/*
{ "FUJ02B8", 0 },
{ "FUJ02B9", 0 },
{ "FUJ02BC", 0 },
+ /* Fujitsu Wacom Tablet PC devices */
+ { "FUJ02E5", 0 },
+ { "FUJ02E6", 0 },
/* Rockwell's (PORALiNK) 33600 INT PNP */
{ "WCI0003", 0 },
/* Unknown PnP modems */
}
}
-static int __init get_port_memory(struct icom_port *icom_port)
+static int __devinit get_port_memory(struct icom_port *icom_port)
{
int index;
unsigned long stgAddr;
0x8024 + 2 - 2 * (icom_port->port - 2);
}
}
-static int __init icom_load_ports(struct icom_adapter *icom_adapter)
+static int __devinit icom_load_ports(struct icom_adapter *icom_adapter)
{
struct icom_port *icom_port;
int port_num;
}
}
- free_irq(icom_adapter->irq_number, (void *) icom_adapter);
+ free_irq(icom_adapter->pci_dev->irq, (void *) icom_adapter);
iounmap(icom_adapter->base_addr);
icom_free_adapter(icom_adapter);
pci_release_regions(icom_adapter->pci_dev);
}
icom_adapter->base_addr_pci = pci_resource_start(dev, 0);
- icom_adapter->irq_number = dev->irq;
icom_adapter->pci_dev = dev;
icom_adapter->version = ent->driver_data;
icom_adapter->subsystem_id = ent->subdevice;
icom_port = &icom_adapter->port_info[index];
if (icom_port->status == ICOM_PORT_ACTIVE) {
- icom_port->uart_port.irq = icom_port->adapter->irq_number;
+ icom_port->uart_port.irq = icom_port->adapter->pci_dev->irq;
icom_port->uart_port.type = PORT_ICOM;
icom_port->uart_port.iotype = UPIO_MEM;
icom_port->uart_port.membase =
struct icom_adapter {
void __iomem * base_addr;
unsigned long base_addr_pci;
- unsigned char irq_number;
struct pci_dev *pci_dev;
struct icom_port port_info[4];
int index;
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
+#if defined(CONFIG_SERIAL_SH_SCI_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
#undef DEBUG
#endif
#include <asm/sci.h>
-
-#if defined(CONFIG_SERIAL_SH_SCI_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
-#define SUPPORT_SYSRQ
-#endif
-
#include "sh-sci.h"
struct sci_port {
struct tty_struct *tty = port->info->tty;
struct sci_port *s = &sci_ports[port->line];
+ if (uart_handle_break(port))
+ return 0;
+
if (!s->break_flag && status & SCxSR_BRK(port)) {
#if defined(CONFIG_CPU_SH3)
/* Debounce break */
*/
sr = spi_w8r8(spi, AT25_RDSR);
if (sr < 0 || sr & AT25_SR_nRDY) {
- dev_dbg(&at25->spi->dev, "rdsr --> %d (%02x)\n", sr, sr);
+ dev_dbg(&spi->dev, "rdsr --> %d (%02x)\n", sr, sr);
err = -ENXIO;
goto fail;
}
if (ret)
return ret;
spi->controller_state = (void *)npcs_pin;
- gpio_direction_output(npcs_pin);
+ gpio_direction_output(npcs_pin, !(spi->mode & SPI_CS_HIGH));
}
dev_dbg(&spi->dev,
* this is exported so that for example a USB or parport based adapter
* driver could add devices (which it would learn about out-of-band).
*/
-struct spi_device *__init_or_module
-spi_new_device(struct spi_master *master, struct spi_board_info *chip)
+struct spi_device *spi_new_device(struct spi_master *master,
+ struct spi_board_info *chip)
{
struct spi_device *proxy;
struct device *dev = master->cdev.dev;
* the master's methods before calling spi_register_master(); and (after errors
* adding the device) calling spi_master_put() to prevent a memory leak.
*/
-struct spi_master * __init_or_module
-spi_alloc_master(struct device *dev, unsigned size)
+struct spi_master *spi_alloc_master(struct device *dev, unsigned size)
{
struct spi_master *master;
* After a successful return, the caller is responsible for calling
* spi_unregister_master().
*/
-int __init_or_module
-spi_register_master(struct spi_master *master)
+int spi_register_master(struct spi_master *master)
{
static atomic_t dyn_bus_id = ATOMIC_INIT((1<<16) - 1);
struct device *dev = master->cdev.dev;
setup_transfer = NULL;
list_for_each_entry (t, &m->transfers, transfer_list) {
- if (bitbang->shutdown) {
- status = -ESHUTDOWN;
- break;
- }
/* override or restore speed and wordsize */
if (t->speed_hz || t->bits_per_word) {
m->status = -EINPROGRESS;
bitbang = spi_master_get_devdata(spi->master);
- if (bitbang->shutdown)
- return -ESHUTDOWN;
spin_lock_irqsave(&bitbang->lock, flags);
if (!spi->max_speed_hz)
*/
int spi_bitbang_stop(struct spi_bitbang *bitbang)
{
- unsigned limit = 500;
-
- spin_lock_irq(&bitbang->lock);
- bitbang->shutdown = 0;
- while (!list_empty(&bitbang->queue) && limit--) {
- spin_unlock_irq(&bitbang->lock);
+ spi_unregister_master(bitbang->master);
- dev_dbg(bitbang->master->cdev.dev, "wait for queue\n");
- msleep(10);
-
- spin_lock_irq(&bitbang->lock);
- }
- spin_unlock_irq(&bitbang->lock);
- if (!list_empty(&bitbang->queue)) {
- dev_err(bitbang->master->cdev.dev, "queue didn't empty\n");
- return -EBUSY;
- }
+ WARN_ON(!list_empty(&bitbang->queue));
destroy_workqueue(bitbang->workqueue);
- spi_unregister_master(bitbang->master);
-
return 0;
}
EXPORT_SYMBOL_GPL(spi_bitbang_stop);
int len;
int count;
- int (*set_cs)(struct s3c2410_spi_info *spi,
+ void (*set_cs)(struct s3c2410_spi_info *spi,
int cs, int pol);
/* data buffers */
switch (value) {
case BITBANG_CS_INACTIVE:
- hw->pdata->set_cs(hw->pdata, spi->chip_select, cspol^1);
+ hw->set_cs(hw->pdata, spi->chip_select, cspol^1);
break;
case BITBANG_CS_ACTIVE:
/* write new configration */
writeb(spcon, hw->regs + S3C2410_SPCON);
- hw->pdata->set_cs(hw->pdata, spi->chip_select, cspol);
+ hw->set_cs(hw->pdata, spi->chip_select, cspol);
break;
}
#include <asm/dec/machtype.h>
#include <asm/dec/serial.h>
#include <asm/dec/system.h>
-#include <asm/dec/tc.h>
#ifdef CONFIG_KGDB
#include <asm/kgdb.h>
#define USBLP_QUIRK_BIDIR 0x1 /* reports bidir but requires unidirectional mode (no INs/reads) */
#define USBLP_QUIRK_USB_INIT 0x2 /* needs vendor USB init string */
+#define USBLP_QUIRK_BAD_CLASS 0x4 /* descriptor uses vendor-specific Class or SubClass */
static const struct quirk_printer_struct quirk_printers[] = {
{ 0x03f0, 0x0004, USBLP_QUIRK_BIDIR }, /* HP DeskJet 895C */
{ 0x0409, 0xf0be, USBLP_QUIRK_BIDIR }, /* NEC Picty920 (HP OEM) */
{ 0x0409, 0xf1be, USBLP_QUIRK_BIDIR }, /* NEC Picty800 (HP OEM) */
{ 0x0482, 0x0010, USBLP_QUIRK_BIDIR }, /* Kyocera Mita FS 820, by zut <kernel@zut.de> */
+ { 0x04b8, 0x0202, USBLP_QUIRK_BAD_CLASS }, /* Seiko Epson Receipt Printer M129C */
{ 0, 0 }
};
ifd = &if_alt->altsetting[i];
if (ifd->desc.bInterfaceClass != 7 || ifd->desc.bInterfaceSubClass != 1)
- continue;
+ if (!(usblp->quirks & USBLP_QUIRK_BAD_CLASS))
+ continue;
if (ifd->desc.bInterfaceProtocol < USBLP_FIRST_PROTOCOL ||
ifd->desc.bInterfaceProtocol > USBLP_LAST_PROTOCOL)
{ USB_INTERFACE_INFO(7, 1, 1) },
{ USB_INTERFACE_INFO(7, 1, 2) },
{ USB_INTERFACE_INFO(7, 1, 3) },
+ { USB_DEVICE(0x04b8, 0x0202) }, /* Seiko Epson Receipt Printer M129C */
{ } /* Terminating entry */
};
static const struct usb_device_id usb_quirk_list[] = {
/* HP 5300/5370C scanner */
{ USB_DEVICE(0x03f0, 0x0701), .driver_info = USB_QUIRK_STRING_FETCH_255 },
-
+ /* Seiko Epson Corp - Perfection 1670 */
+ { USB_DEVICE(0x04b8, 0x011f), .driver_info = USB_QUIRK_NO_AUTOSUSPEND },
/* Elsa MicroLink 56k (V.250) */
{ USB_DEVICE(0x05cc, 0x2267), .driver_info = USB_QUIRK_NO_AUTOSUSPEND },
/*-------------------------------------------------------------------------*/
+/*
+ * dma-coherent memory allocation (for dma-capable endpoints)
+ *
+ * NOTE: the dma_*_coherent() API calls suck. Most implementations are
+ * (a) page-oriented, so small buffers lose big; and (b) asymmetric with
+ * respect to calls with irqs disabled: alloc is safe, free is not.
+ * We currently work around (b), but not (a).
+ */
+
static void *
omap_alloc_buffer(
struct usb_ep *_ep,
void *retval;
struct omap_ep *ep;
+ if (!_ep)
+ return NULL;
+
ep = container_of(_ep, struct omap_ep, ep);
if (use_dma && ep->has_dma) {
static int warned;
return retval;
}
+static DEFINE_SPINLOCK(buflock);
+static LIST_HEAD(buffers);
+
+struct free_record {
+ struct list_head list;
+ struct device *dev;
+ unsigned bytes;
+ dma_addr_t dma;
+};
+
+static void do_free(unsigned long ignored)
+{
+ spin_lock_irq(&buflock);
+ while (!list_empty(&buffers)) {
+ struct free_record *buf;
+
+ buf = list_entry(buffers.next, struct free_record, list);
+ list_del(&buf->list);
+ spin_unlock_irq(&buflock);
+
+ dma_free_coherent(buf->dev, buf->bytes, buf, buf->dma);
+
+ spin_lock_irq(&buflock);
+ }
+ spin_unlock_irq(&buflock);
+}
+
+static DECLARE_TASKLET(deferred_free, do_free, 0);
+
static void omap_free_buffer(
struct usb_ep *_ep,
void *buf,
unsigned bytes
)
{
- struct omap_ep *ep;
+ if (!_ep) {
+ WARN_ON(1);
+ return;
+ }
- ep = container_of(_ep, struct omap_ep, ep);
- if (use_dma && _ep && ep->has_dma)
- dma_free_coherent(ep->udc->gadget.dev.parent, bytes, buf, dma);
- else
- kfree (buf);
+ /* free memory into the right allocator */
+ if (dma != DMA_ADDR_INVALID) {
+ struct omap_ep *ep;
+ struct free_record *rec = buf;
+ unsigned long flags;
+
+ ep = container_of(_ep, struct omap_ep, ep);
+
+ rec->dev = ep->udc->gadget.dev.parent;
+ rec->bytes = bytes;
+ rec->dma = dma;
+
+ spin_lock_irqsave(&buflock, flags);
+ list_add_tail(&rec->list, &buffers);
+ tasklet_schedule(&deferred_free);
+ spin_unlock_irqrestore(&buflock, flags);
+ } else
+ kfree(buf);
}
/*-------------------------------------------------------------------------*/
udc->ep0_pending = 0;
break;
case USB_REQ_GET_STATUS:
+ /* USB_ENDPOINT_HALT status? */
+ if (u.r.bRequestType != (USB_DIR_IN|USB_RECIP_ENDPOINT))
+ goto intf_status;
+
+ /* ep0 never stalls */
+ if (!(w_index & 0xf))
+ goto zero_status;
+
+ /* only active endpoints count */
+ ep = &udc->ep[w_index & 0xf];
+ if (w_index & USB_DIR_IN)
+ ep += 16;
+ if (!ep->desc)
+ goto do_stall;
+
+ /* iso never stalls */
+ if (ep->bmAttributes == USB_ENDPOINT_XFER_ISOC)
+ goto zero_status;
+
+ /* FIXME don't assume non-halted endpoints!! */
+ ERR("%s status, can't report\n", ep->ep.name);
+ goto do_stall;
+
+intf_status:
/* return interface status. if we were pedantic,
* we'd detect non-existent interfaces, and stall.
*/
if (u.r.bRequestType
!= (USB_DIR_IN|USB_RECIP_INTERFACE))
goto delegate;
+
+zero_status:
/* return two zero bytes */
UDC_EP_NUM_REG = UDC_EP_SEL|UDC_EP_DIR;
UDC_DATA_REG = 0;
/*-------------------------------------------------------------------------*/
-static inline int machine_needs_vbus_session(void)
+static inline int machine_without_vbus_sense(void)
{
return (machine_is_omap_innovator()
|| machine_is_omap_osk()
/* boards that don't have VBUS sensing can't autogate 48MHz;
* can't enter deep sleep while a gadget driver is active.
*/
- if (machine_needs_vbus_session())
+ if (machine_without_vbus_sense())
omap_vbus_session(&udc->gadget, 1);
done:
if (udc->dc_clk != NULL)
omap_udc_enable_clock(1);
- if (machine_needs_vbus_session())
+ if (machine_without_vbus_sense())
omap_vbus_session(&udc->gadget, 0);
if (udc->transceiver)
hmc = HMC_1510;
type = "(unknown)";
- if (machine_is_omap_innovator() || machine_is_sx1()) {
+ if (machine_without_vbus_sense()) {
/* just set up software VBUS detect, and then
* later rig it so we always report VBUS.
* FIXME without really sensing VBUS, we can't
*/
ehci->reset_done [i] = jiffies + msecs_to_jiffies (20);
ehci_dbg (ehci, "port %d remote wakeup\n", i + 1);
+ mod_timer(&hcd->rh_timer, ehci->reset_done[i]);
}
}
return out - buf;
}
-static int uhci_show_qh(struct uhci_qh *qh, char *buf, int len, int space)
+static int uhci_show_qh(struct uhci_hcd *uhci,
+ struct uhci_qh *qh, char *buf, int len, int space)
{
char *out = buf;
int i, nurbs;
if (list_empty(&qh->queue)) {
out += sprintf(out, "%*s queue is empty\n", space, "");
+ if (qh == uhci->skel_async_qh)
+ out += uhci_show_td(uhci->term_td, out,
+ len - (out - buf), 0);
} else {
struct urb_priv *urbp = list_entry(qh->queue.next,
struct urb_priv, node);
struct list_head *tmp, *head;
int nframes, nerrs;
__le32 link;
+ __le32 fsbr_link;
static const char * const qh_names[] = {
"unlink", "iso", "int128", "int64", "int32", "int16",
out += sprintf(out, "Skeleton QHs\n");
+ fsbr_link = 0;
for (i = 0; i < UHCI_NUM_SKELQH; ++i) {
int cnt = 0;
- __le32 fsbr_link = 0;
qh = uhci->skelqh[i];
out += sprintf(out, "- skel_%s_qh\n", qh_names[i]); \
- out += uhci_show_qh(qh, out, len - (out - buf), 4);
+ out += uhci_show_qh(uhci, qh, out, len - (out - buf), 4);
/* Last QH is the Terminating QH, it's different */
if (i == SKEL_TERM) {
if (qh_element(qh) != LINK_TO_TD(uhci->term_td))
out += sprintf(out, " skel_term_qh element is not set to term_td!\n");
- if (link == LINK_TO_QH(uhci->skel_term_qh))
- goto check_qh_link;
- continue;
+ link = fsbr_link;
+ if (!link)
+ link = LINK_TO_QH(uhci->skel_term_qh);
+ goto check_qh_link;
}
head = &qh->node;
qh = list_entry(tmp, struct uhci_qh, node);
tmp = tmp->next;
if (++cnt <= 10)
- out += uhci_show_qh(qh, out,
+ out += uhci_show_qh(uhci, qh, out,
len - (out - buf), 4);
if (!fsbr_link && qh->skel >= SKEL_FSBR)
fsbr_link = LINK_TO_QH(qh);
link = LINK_TO_QH(uhci->skel_async_qh);
else if (!uhci->fsbr_is_on)
;
- else if (fsbr_link)
- link = fsbr_link;
else
link = LINK_TO_QH(uhci->skel_term_qh);
check_qh_link:
static inline void lprintk(char *buf)
{}
-static inline int uhci_show_qh(struct uhci_qh *qh, char *buf,
- int len, int space)
+static inline int uhci_show_qh(struct uhci_hcd *uhci,
+ struct uhci_qh *qh, char *buf, int len, int space)
{
return 0;
}
*/
for (i = SKEL_ISO + 1; i < SKEL_ASYNC; ++i)
uhci->skelqh[i]->link = LINK_TO_QH(uhci->skel_async_qh);
- uhci->skel_async_qh->link = uhci->skel_term_qh->link = UHCI_PTR_TERM;
+ uhci->skel_async_qh->link = UHCI_PTR_TERM;
+ uhci->skel_term_qh->link = LINK_TO_QH(uhci->skel_term_qh);
/* This dummy TD is to work around a bug in Intel PIIX controllers */
uhci_fill_td(uhci->term_td, 0, uhci_explen(0) |
*/
static void uhci_fsbr_on(struct uhci_hcd *uhci)
{
- struct uhci_qh *fsbr_qh, *lqh, *tqh;
+ struct uhci_qh *lqh;
+ /* The terminating skeleton QH always points back to the first
+ * FSBR QH. Make the last async QH point to the terminating
+ * skeleton QH. */
uhci->fsbr_is_on = 1;
lqh = list_entry(uhci->skel_async_qh->node.prev,
struct uhci_qh, node);
-
- /* Find the first FSBR QH. Linear search through the list is
- * acceptable because normally FSBR gets turned on as soon as
- * one QH needs it. */
- fsbr_qh = NULL;
- list_for_each_entry_reverse(tqh, &uhci->skel_async_qh->node, node) {
- if (tqh->skel < SKEL_FSBR)
- break;
- fsbr_qh = tqh;
- }
-
- /* No FSBR QH means we must insert the terminating skeleton QH */
- if (!fsbr_qh) {
- uhci->skel_term_qh->link = LINK_TO_QH(uhci->skel_term_qh);
- wmb();
- lqh->link = uhci->skel_term_qh->link;
-
- /* Otherwise loop the last QH to the first FSBR QH */
- } else
- lqh->link = LINK_TO_QH(fsbr_qh);
+ lqh->link = LINK_TO_QH(uhci->skel_term_qh);
}
static void uhci_fsbr_off(struct uhci_hcd *uhci)
{
struct uhci_qh *lqh;
+ /* Remove the link from the last async QH to the terminating
+ * skeleton QH. */
uhci->fsbr_is_on = 0;
lqh = list_entry(uhci->skel_async_qh->node.prev,
struct uhci_qh, node);
-
- /* End the async list normally and unlink the terminating QH */
- lqh->link = uhci->skel_term_qh->link = UHCI_PTR_TERM;
+ lqh->link = UHCI_PTR_TERM;
}
static void uhci_add_fsbr(struct uhci_hcd *uhci, struct urb *urb)
*/
static void link_async(struct uhci_hcd *uhci, struct uhci_qh *qh)
{
- struct uhci_qh *pqh, *lqh;
+ struct uhci_qh *pqh;
__le32 link_to_new_qh;
- __le32 *extra_link = &link_to_new_qh;
/* Find the predecessor QH for our new one and insert it in the list.
* The list of QHs is expected to be short, so linear search won't
break;
}
list_add(&qh->node, &pqh->node);
- qh->link = pqh->link;
-
- link_to_new_qh = LINK_TO_QH(qh);
-
- /* If this is now the first FSBR QH, take special action */
- if (uhci->fsbr_is_on && pqh->skel < SKEL_FSBR &&
- qh->skel >= SKEL_FSBR) {
- lqh = list_entry(uhci->skel_async_qh->node.prev,
- struct uhci_qh, node);
-
- /* If the new QH is also the last one, we must unlink
- * the terminating skeleton QH and make the new QH point
- * back to itself. */
- if (qh == lqh) {
- qh->link = link_to_new_qh;
- extra_link = &uhci->skel_term_qh->link;
-
- /* Otherwise the last QH must point to the new QH */
- } else
- extra_link = &lqh->link;
- }
/* Link it into the schedule */
+ qh->link = pqh->link;
wmb();
- *extra_link = pqh->link = link_to_new_qh;
+ link_to_new_qh = LINK_TO_QH(qh);
+ pqh->link = link_to_new_qh;
+
+ /* If this is now the first FSBR QH, link the terminating skeleton
+ * QH to it. */
+ if (pqh->skel < SKEL_FSBR && qh->skel >= SKEL_FSBR)
+ uhci->skel_term_qh->link = link_to_new_qh;
}
/*
*/
static void unlink_async(struct uhci_hcd *uhci, struct uhci_qh *qh)
{
- struct uhci_qh *pqh, *lqh;
+ struct uhci_qh *pqh;
__le32 link_to_next_qh = qh->link;
pqh = list_entry(qh->node.prev, struct uhci_qh, node);
-
- /* If this is the first FSBQ QH, take special action */
- if (uhci->fsbr_is_on && pqh->skel < SKEL_FSBR &&
- qh->skel >= SKEL_FSBR) {
- lqh = list_entry(uhci->skel_async_qh->node.prev,
- struct uhci_qh, node);
-
- /* If this QH is also the last one, we must link in
- * the terminating skeleton QH. */
- if (qh == lqh) {
- link_to_next_qh = LINK_TO_QH(uhci->skel_term_qh);
- uhci->skel_term_qh->link = link_to_next_qh;
- wmb();
- qh->link = link_to_next_qh;
-
- /* Otherwise the last QH must point to the new first FSBR QH */
- } else
- lqh->link = link_to_next_qh;
- }
-
pqh->link = link_to_next_qh;
+
+ /* If this was the old first FSBR QH, link the terminating skeleton
+ * QH to the next (new first FSBR) QH. */
+ if (pqh->skel < SKEL_FSBR && qh->skel >= SKEL_FSBR)
+ uhci->skel_term_qh->link = link_to_next_qh;
mb();
}
if (debug > 1 && errbuf) {
/* Print the chain for debugging */
- uhci_show_qh(urbp->qh, errbuf,
+ uhci_show_qh(uhci, urbp->qh, errbuf,
ERRBUF_LEN, 0);
lprintk(errbuf);
}
return retval;
}
- dbg(&udev->dev, "Sending first magic command\n");
+ dbg(&udev->dev, "Sending second magic command\n");
retval = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
0xa2, 0x40, 0, 1, dummy_buffer, 0, 100);
if (retval != 0) {
USB_DEVICE(0x0a46, 0x9601), /* Davicom USB-100 */
.driver_info = (unsigned long)&dm9601_info,
},
+ {
+ USB_DEVICE(0x0a46, 0x6688), /* ZT6688 USB NIC */
+ .driver_info = (unsigned long)&dm9601_info,
+ },
+ {
+ USB_DEVICE(0x0a46, 0x0268), /* ShanTou ST268 USB NIC */
+ .driver_info = (unsigned long)&dm9601_info,
+ },
{}, // END
};
return ret;
}
+/* Returns 0 on success, error on failure */
static int read_mii_word(pegasus_t * pegasus, __u8 phy, __u8 indx, __u16 * regd)
{
int i;
* d[0].NO_CARRIER kicks in only with failed TX.
* ... so monitoring with MII may be safest.
*/
- if (d[0] & NO_CARRIER)
- netif_carrier_off(net);
- else
- netif_carrier_on(net);
+ if (pegasus->features & TRUST_LINK_STATUS) {
+ if (d[5] & LINK_STATUS)
+ netif_carrier_on(net);
+ else
+ netif_carrier_off(net);
+ } else {
+ /* Never set carrier _on_ based on ! NO_CARRIER */
+ if (d[0] & NO_CARRIER)
+ netif_carrier_off(net);
+ }
/* bytes 3-4 == rx_lostpkt, reg 2E/2F */
pegasus->stats.rx_missed_errors += ((d[3] & 0x7f) << 8) | d[4];
pegasus_t *pegasus = netdev_priv(net);
u16 tmp;
- if (!read_mii_word(pegasus, pegasus->phy, MII_BMSR, &tmp))
+ if (read_mii_word(pegasus, pegasus->phy, MII_BMSR, &tmp))
return;
if (tmp & BMSR_LSTATUS)
#define PEGASUS_II 0x80000000
#define HAS_HOME_PNA 0x40000000
+#define TRUST_LINK_STATUS 0x20000000
#define PEGASUS_MTU 1536
#define RX_SKBS 4
PEGASUS_DEV( "Allied Telesyn Int. AT-USB100", VENDOR_ALLIEDTEL, 0xb100,
DEFAULT_GPIO_RESET | PEGASUS_II )
PEGASUS_DEV( "Belkin F5D5050 USB Ethernet", VENDOR_BELKIN, 0x0121,
- DEFAULT_GPIO_RESET | PEGASUS_II )
+ DEFAULT_GPIO_RESET | PEGASUS_II | TRUST_LINK_STATUS )
PEGASUS_DEV( "Billionton USB-100", VENDOR_BILLIONTON, 0x0986,
DEFAULT_GPIO_RESET )
PEGASUS_DEV( "Billionton USBLP-100", VENDOR_BILLIONTON, 0x0987,
static struct usb_device_id id_table [] = {
{ USB_DEVICE(0x0c88, 0x17da) }, /* Kyocera Wireless KPC650/Passport */
- { USB_DEVICE(0x1410, 0x1110) }, /* Novatel Wireless Merlin CDMA */
- { USB_DEVICE(0x1410, 0x1130) }, /* Novatel Wireless S720 CDMA/EV-DO */
- { USB_DEVICE(0x1410, 0x2110) }, /* Novatel Wireless U720 CDMA/EV-DO */
- { USB_DEVICE(0x1410, 0x1430) }, /* Novatel Merlin XU870 HSDPA/3G */
- { USB_DEVICE(0x1410, 0x1100) }, /* ExpressCard34 Qualcomm 3G CDMA */
{ USB_DEVICE(0x413c, 0x8115) }, /* Dell Wireless HSDPA 5500 */
{ },
};
break;
case FT232BM: /* FT232BM chip */
case FT2232C: /* FT2232C chip */
+ case FT232RL:
if (baud <= 3000000) {
div_value = ftdi_232bm_baud_to_divisor(baud);
} else {
/* (It might be a BM because of the iSerialNumber bug,
* but it will still work as an AM device.) */
priv->chip_type = FT8U232AM;
- } else {
+ } else if (version < 0x600) {
/* Assume its an FT232BM (or FT245BM) */
priv->chip_type = FT232BM;
+ } else {
+ /* Assume its an FT232R */
+ priv->chip_type = FT232RL;
}
info("Detected %s", ftdi_chip_name[priv->chip_type]);
}
#include <linux/usb/serial.h>
#include <asm/uaccess.h>
-static int generic_probe(struct usb_interface *interface,
- const struct usb_device_id *id);
-
static int debug;
#ifdef CONFIG_USB_SERIAL_GENERIC
+
+static int generic_probe(struct usb_interface *interface,
+ const struct usb_device_id *id);
+
static __u16 vendor = 0x05f9;
static __u16 product = 0xffff;
.chars_in_buffer = mos7720_chars_in_buffer,
.break_ctl = mos7720_break,
.read_bulk_callback = mos7720_bulk_in_callback,
+ .read_int_callback = mos7720_interrupt_callback,
};
static int __init moschip7720_init(void)
#define HUAWEI_PRODUCT_E220 0x1003
#define NOVATELWIRELESS_VENDOR_ID 0x1410
-#define NOVATELWIRELESS_PRODUCT_U740 0x1400
#define ANYDATA_VENDOR_ID 0x16d5
#define ANYDATA_PRODUCT_ID 0x6501
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_ETNA_KOI_NETWORK) },
{ USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E600) },
{ USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E220) },
- { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID,NOVATELWIRELESS_PRODUCT_U740) },
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x1100) }, /* Novatel Merlin XS620/S640 */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x1110) }, /* Novatel Merlin S620 */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x1120) }, /* Novatel Merlin EX720 */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x1130) }, /* Novatel Merlin S720 */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x1400) }, /* Novatel U730 */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x1410) }, /* Novatel U740 */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x1420) }, /* Novatel EU870 */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x1430) }, /* Novatel Merlin XU870 HSDPA/3G */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x2100) }, /* Novatel EV620 CDMA/EV-DO */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x2110) }, /* Novatel Merlin ES620 / Merlin ES720 / Ovation U720 */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x2130) }, /* Novatel Merlin ES620 SM Bus */
+ { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, 0x2410) }, /* Novatel EU740 */
{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ID) },
{ } /* Terminating entry */
};
dbg("%s - %s", __FUNCTION__, serial->type->description);
+ serial->type->shutdown(serial);
+
+ /* return the minor range that this device had */
+ return_serial(serial);
+
for (i = 0; i < serial->num_ports; ++i)
serial->port[i]->open_count = 0;
serial->port[i] = NULL;
}
- if (serial->type->shutdown)
- serial->type->shutdown(serial);
-
- /* return the minor range that this device had */
- return_serial(serial);
-
/* If this is a "fake" port, we have to clean it up here, as it will
* not get cleaned up in port_release() as it was never registered with
* the driver core */
US_SC_DEVICE, US_PR_DEVICE, NULL,
US_FL_FIX_CAPACITY),
+/* Reported by Emil Larsson <emil@swip.net> */
+UNUSUAL_DEV( 0x04b0, 0x0411, 0x0100, 0x0100,
+ "NIKON",
+ "NIKON DSC D80",
+ US_SC_DEVICE, US_PR_DEVICE, NULL,
+ US_FL_FIX_CAPACITY),
+
/* BENQ DC5330
* Reported by Manuel Fombuena <mfombuena@ya.com> and
* Frank Copeland <fjc@thingy.apana.org.au> */
US_SC_DEVICE, US_PR_DEVICE, NULL,
US_FL_FIX_CAPACITY | US_FL_IGNORE_RESIDUE ),
+/*
+ * Patch by Pete Zaitcev <zaitcev@redhat.com>
+ * Report by Mark Patton. Red Hat bz#208928.
+ */
+UNUSUAL_DEV( 0x22b8, 0x4810, 0x0001, 0x0001,
+ "Motorola",
+ "RAZR V3i",
+ US_SC_DEVICE, US_PR_DEVICE, NULL,
+ US_FL_FIX_CAPACITY),
+
/* Reported by Radovan Garabik <garabik@kassiopeia.juls.savba.sk> */
UNUSUAL_DEV( 0x2735, 0x100b, 0x0000, 0x9999,
"MPIO",
This is particularly important to one driver, matroxfb. If
unsure, say N.
-comment "Frambuffer hardware drivers"
+comment "Frame buffer hardware drivers"
depends on FB
config FB_CIRRUS
config FB_AU1200
bool "Au1200 LCD Driver"
- depends on FB && MIPS && SOC_AU1200
+ depends on (FB = y) && MIPS && SOC_AU1200
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
config FB_68328
bool "Motorola 68328 native frame buffer support"
- depends on FB && (M68328 || M68EZ328 || M68VZ328)
+ depends on (FB = y) && (M68328 || M68EZ328 || M68VZ328)
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
config FB_PS3
bool "PS3 GPU framebuffer driver"
- depends on FB && PS3_PS3AV
+ depends on (FB = y) && PS3_PS3AV
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
{
unsigned long flags;
- locomobl_data.brightness = 0;
- locomobl_data.power = 0;
+ locomolcd_bl_device->props.brightness = 0;
+ locomolcd_bl_device->props.power = 0;
locomolcd_set_intensity(locomolcd_bl_device);
backlight_device_unregister(locomolcd_bl_device);
u8 temp;
struct backlight_device *progear_backlight_device;
- pmu_dev = pci_get_device(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M7101, 0);
+ pmu_dev = pci_get_device(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M7101, NULL);
if (!pmu_dev) {
printk("ALI M7101 PMU not found.\n");
return -ENODEV;
}
- sb_dev = pci_get_device(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, 0);
+ sb_dev = pci_get_device(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, NULL);
if (!sb_dev) {
printk("ALI 1533 SB not found.\n");
pci_dev_put(pmu_dev);
* Initialisation
*/
-static void
-bw2_init_fix(struct fb_info *info, int linebytes)
+static void __devinit bw2_init_fix(struct fb_info *info, int linebytes)
{
strlcpy(info->fix.id, "bwtwo", sizeof(info->fix.id));
info->fix.accel = FB_ACCEL_SUN_BWTWO;
}
-static u8 bw2regs_1600[] __initdata = {
+static u8 bw2regs_1600[] __devinitdata = {
0x14, 0x8b, 0x15, 0x28, 0x16, 0x03, 0x17, 0x13,
0x18, 0x7b, 0x19, 0x05, 0x1a, 0x34, 0x1b, 0x2e,
0x1c, 0x00, 0x1d, 0x0a, 0x1e, 0xff, 0x1f, 0x01,
0x10, 0x21, 0
};
-static u8 bw2regs_ecl[] __initdata = {
+static u8 bw2regs_ecl[] __devinitdata = {
0x14, 0x65, 0x15, 0x1e, 0x16, 0x04, 0x17, 0x0c,
0x18, 0x5e, 0x19, 0x03, 0x1a, 0xa7, 0x1b, 0x23,
0x1c, 0x00, 0x1d, 0x08, 0x1e, 0xff, 0x1f, 0x01,
0x10, 0x20, 0
};
-static u8 bw2regs_analog[] __initdata = {
+static u8 bw2regs_analog[] __devinitdata = {
0x14, 0xbb, 0x15, 0x2b, 0x16, 0x03, 0x17, 0x13,
0x18, 0xb0, 0x19, 0x03, 0x1a, 0xa6, 0x1b, 0x22,
0x1c, 0x01, 0x1d, 0x05, 0x1e, 0xff, 0x1f, 0x01,
0x10, 0x20, 0
};
-static u8 bw2regs_76hz[] __initdata = {
+static u8 bw2regs_76hz[] __devinitdata = {
0x14, 0xb7, 0x15, 0x27, 0x16, 0x03, 0x17, 0x0f,
0x18, 0xae, 0x19, 0x03, 0x1a, 0xae, 0x1b, 0x2a,
0x1c, 0x01, 0x1d, 0x09, 0x1e, 0xff, 0x1f, 0x01,
0x10, 0x24, 0
};
-static u8 bw2regs_66hz[] __initdata = {
+static u8 bw2regs_66hz[] __devinitdata = {
0x14, 0xbb, 0x15, 0x2b, 0x16, 0x04, 0x17, 0x14,
0x18, 0xae, 0x19, 0x03, 0x1a, 0xa8, 0x1b, 0x24,
0x1c, 0x01, 0x1d, 0x05, 0x1e, 0xff, 0x1f, 0x01,
0x10, 0x20, 0
};
-static void bw2_do_default_mode(struct bw2_par *par, struct fb_info *info,
- int *linebytes)
+static void __devinit bw2_do_default_mode(struct bw2_par *par,
+ struct fb_info *info,
+ int *linebytes)
{
u8 status, mon;
u8 *p;
* Initialisation
*/
-static void cg14_init_fix(struct fb_info *info, int linebytes, struct device_node *dp)
+static void __devinit cg14_init_fix(struct fb_info *info, int linebytes,
+ struct device_node *dp)
{
const char *name = dp->name;
info->fix.accel = FB_ACCEL_SUN_CG14;
}
-static struct sbus_mmap_map __cg14_mmap_map[CG14_MMAP_ENTRIES] __initdata = {
+static struct sbus_mmap_map __cg14_mmap_map[CG14_MMAP_ENTRIES] __devinitdata = {
{
.voff = CG14_REGS,
.poff = 0x80000000,
* @blank_mode: the blank mode we want.
* @info: frame buffer structure that represents a single frame buffer
*/
-static int
-cg3_blank(int blank, struct fb_info *info)
+static int cg3_blank(int blank, struct fb_info *info)
{
struct cg3_par *par = (struct cg3_par *) info->par;
struct cg3_regs __iomem *regs = par->regs;
* Initialisation
*/
-static void
-cg3_init_fix(struct fb_info *info, int linebytes, struct device_node *dp)
+static void __devinit cg3_init_fix(struct fb_info *info, int linebytes,
+ struct device_node *dp)
{
strlcpy(info->fix.id, dp->name, sizeof(info->fix.id));
info->fix.accel = FB_ACCEL_SUN_CGTHREE;
}
-static void cg3_rdi_maybe_fixup_var(struct fb_var_screeninfo *var,
- struct device_node *dp)
+static void __devinit cg3_rdi_maybe_fixup_var(struct fb_var_screeninfo *var,
+ struct device_node *dp)
{
char *params;
char *p;
}
}
-static u8 cg3regvals_66hz[] __initdata = { /* 1152 x 900, 66 Hz */
+static u8 cg3regvals_66hz[] __devinitdata = { /* 1152 x 900, 66 Hz */
0x14, 0xbb, 0x15, 0x2b, 0x16, 0x04, 0x17, 0x14,
0x18, 0xae, 0x19, 0x03, 0x1a, 0xa8, 0x1b, 0x24,
0x1c, 0x01, 0x1d, 0x05, 0x1e, 0xff, 0x1f, 0x01,
0x10, 0x20, 0
};
-static u8 cg3regvals_76hz[] __initdata = { /* 1152 x 900, 76 Hz */
+static u8 cg3regvals_76hz[] __devinitdata = { /* 1152 x 900, 76 Hz */
0x14, 0xb7, 0x15, 0x27, 0x16, 0x03, 0x17, 0x0f,
0x18, 0xae, 0x19, 0x03, 0x1a, 0xae, 0x1b, 0x2a,
0x1c, 0x01, 0x1d, 0x09, 0x1e, 0xff, 0x1f, 0x01,
0x10, 0x24, 0
};
-static u8 cg3regvals_rdi[] __initdata = { /* 640 x 480, cgRDI */
+static u8 cg3regvals_rdi[] __devinitdata = { /* 640 x 480, cgRDI */
0x14, 0x70, 0x15, 0x20, 0x16, 0x08, 0x17, 0x10,
0x18, 0x06, 0x19, 0x02, 0x1a, 0x31, 0x1b, 0x51,
0x1c, 0x06, 0x1d, 0x0c, 0x1e, 0xff, 0x1f, 0x01,
0x10, 0x22, 0
};
-static u8 *cg3_regvals[] __initdata = {
+static u8 *cg3_regvals[] __devinitdata = {
cg3regvals_66hz, cg3regvals_76hz, cg3regvals_rdi
};
-static u_char cg3_dacvals[] __initdata = {
+static u_char cg3_dacvals[] __devinitdata = {
4, 0xff, 5, 0x00, 6, 0x70, 7, 0x00, 0
};
-static void cg3_do_default_mode(struct cg3_par *par)
+static void __devinit cg3_do_default_mode(struct cg3_par *par)
{
enum cg3_type type;
u8 *p;
return 0;
}
-static int __devinit cg3_probe(struct of_device *dev, const struct of_device_id *match)
+static int __devinit cg3_probe(struct of_device *dev,
+ const struct of_device_id *match)
{
struct of_device *op = to_of_device(&dev->dev);
u32 value2;
};
+#define FFB_DAC_UCTRL 0x1001 /* User Control */
+#define FFB_DAC_UCTRL_MANREV 0x00000f00 /* 4-bit Manufacturing Revision */
+#define FFB_DAC_UCTRL_MANREV_SHIFT 8
+#define FFB_DAC_TGEN 0x6000 /* Timing Generator */
+#define FFB_DAC_TGEN_VIDE 0x00000001 /* Video Enable */
+#define FFB_DAC_DID 0x8000 /* Device Identification */
+#define FFB_DAC_DID_PNUM 0x0ffff000 /* Device Part Number */
+#define FFB_DAC_DID_PNUM_SHIFT 12
+#define FFB_DAC_DID_REV 0xf0000000 /* Device Revision */
+#define FFB_DAC_DID_REV_SHIFT 28
+
+#define FFB_DAC_CUR_CTRL 0x100
+#define FFB_DAC_CUR_CTRL_P0 0x00000001
+#define FFB_DAC_CUR_CTRL_P1 0x00000002
+
struct ffb_par {
spinlock_t lock;
struct ffb_fbc __iomem *fbc;
struct ffb_dac __iomem *dac;
u32 flags;
-#define FFB_FLAG_AFB 0x00000001
-#define FFB_FLAG_BLANKED 0x00000002
+#define FFB_FLAG_AFB 0x00000001 /* AFB m3 or m6 */
+#define FFB_FLAG_BLANKED 0x00000002 /* screen is blanked */
+#define FFB_FLAG_INVCURSOR 0x00000004 /* DAC has inverted cursor logic */
u32 fg_cache __attribute__((aligned (8)));
u32 bg_cache;
unsigned long physbase;
unsigned long fbsize;
- int dac_rev;
int board_type;
};
FFBWait(par);
/* Disable cursor. */
- upa_writel(0x100, &dac->type2);
- if (par->dac_rev <= 2)
+ upa_writel(FFB_DAC_CUR_CTRL, &dac->type2);
+ if (par->flags & FFB_FLAG_INVCURSOR)
upa_writel(0, &dac->value2);
else
- upa_writel(3, &dac->value2);
+ upa_writel((FFB_DAC_CUR_CTRL_P0 |
+ FFB_DAC_CUR_CTRL_P1), &dac->value2);
spin_unlock_irqrestore(&par->lock, flags);
}
struct ffb_par *par = (struct ffb_par *) info->par;
struct ffb_dac __iomem *dac = par->dac;
unsigned long flags;
- u32 tmp;
+ u32 val;
+ int i;
spin_lock_irqsave(&par->lock, flags);
FFBWait(par);
+ upa_writel(FFB_DAC_TGEN, &dac->type);
+ val = upa_readl(&dac->value);
switch (blank) {
case FB_BLANK_UNBLANK: /* Unblanking */
- upa_writel(0x6000, &dac->type);
- tmp = (upa_readl(&dac->value) | 0x1);
- upa_writel(0x6000, &dac->type);
- upa_writel(tmp, &dac->value);
+ val |= FFB_DAC_TGEN_VIDE;
par->flags &= ~FFB_FLAG_BLANKED;
break;
case FB_BLANK_VSYNC_SUSPEND: /* VESA blank (vsync off) */
case FB_BLANK_HSYNC_SUSPEND: /* VESA blank (hsync off) */
case FB_BLANK_POWERDOWN: /* Poweroff */
- upa_writel(0x6000, &dac->type);
- tmp = (upa_readl(&dac->value) & ~0x1);
- upa_writel(0x6000, &dac->type);
- upa_writel(tmp, &dac->value);
+ val &= ~FFB_DAC_TGEN_VIDE;
par->flags |= FFB_FLAG_BLANKED;
break;
}
+ upa_writel(FFB_DAC_TGEN, &dac->type);
+ upa_writel(val, &dac->value);
+ for (i = 0; i < 10; i++) {
+ upa_writel(FFB_DAC_TGEN, &dac->type);
+ upa_readl(&dac->value);
+ }
spin_unlock_irqrestore(&par->lock, flags);
struct ffb_dac __iomem *dac;
struct all_info *all;
int err;
+ u32 dac_pnum, dac_rev, dac_mrev;
all = kzalloc(sizeof(*all), GFP_KERNEL);
if (!all)
if ((upa_readl(&fbc->ucsr) & FFB_UCSR_ALL_ERRORS) != 0)
upa_writel(FFB_UCSR_ALL_ERRORS, &fbc->ucsr);
- ffb_switch_from_graph(&all->par);
-
dac = all->par.dac;
- upa_writel(0x8000, &dac->type);
- all->par.dac_rev = upa_readl(&dac->value) >> 0x1c;
+ upa_writel(FFB_DAC_DID, &dac->type);
+ dac_pnum = upa_readl(&dac->value);
+ dac_rev = (dac_pnum & FFB_DAC_DID_REV) >> FFB_DAC_DID_REV_SHIFT;
+ dac_pnum = (dac_pnum & FFB_DAC_DID_PNUM) >> FFB_DAC_DID_PNUM_SHIFT;
+
+ upa_writel(FFB_DAC_UCTRL, &dac->type);
+ dac_mrev = upa_readl(&dac->value);
+ dac_mrev = (dac_mrev & FFB_DAC_UCTRL_MANREV) >>
+ FFB_DAC_UCTRL_MANREV_SHIFT;
/* Elite3D has different DAC revision numbering, and no DAC revisions
- * have the reversed meaning of cursor enable.
+ * have the reversed meaning of cursor enable. Otherwise, Pacifica 1
+ * ramdacs with manufacturing revision less than 3 have inverted
+ * cursor logic. We identify Pacifica 1 as not Pacifica 2, the
+ * latter having a part number value of 0x236e.
*/
- if (all->par.flags & FFB_FLAG_AFB)
- all->par.dac_rev = 10;
+ if ((all->par.flags & FFB_FLAG_AFB) || dac_pnum == 0x236e) {
+ all->par.flags &= ~FFB_FLAG_INVCURSOR;
+ } else {
+ if (dac_mrev < 3)
+ all->par.flags |= FFB_FLAG_INVCURSOR;
+ }
+
+ ffb_switch_from_graph(&all->par);
/* Unblank it just to be sure. When there are multiple
* FFB/AFB cards in the system, or it is not the OBP
dev_set_drvdata(&op->dev, all);
- printk("%s: %s at %016lx, type %d, DAC revision %d\n",
+ printk("%s: %s at %016lx, type %d, "
+ "DAC pnum[%x] rev[%d] manuf_rev[%d]\n",
dp->full_name,
((all->par.flags & FFB_FLAG_AFB) ? "AFB" : "FFB"),
- all->par.physbase, all->par.board_type, all->par.dac_rev);
+ all->par.physbase, all->par.board_type,
+ dac_pnum, dac_rev, dac_mrev);
return 0;
}
#define MAX_LEVEL 0x534
#define LEVEL_STEP ((MAX_LEVEL - MIN_LEVEL) / FB_BACKLIGHT_MAX)
-static struct backlight_properties riva_bl_data;
-
static int riva_bl_get_level_brightness(struct riva_par *par,
int level)
{
FB_BACKLIGHT_MAX);
bd->props.max_brightness = FB_BACKLIGHT_LEVELS - 1;
- bd->props.brightness = riva_bl_data.max_brightness;
+ bd->props.brightness = bd->props.max_brightness;
bd->props.power = FB_BLANK_UNBLANK;
backlight_update_status(bd);
static void __devexit s3_pci_remove(struct pci_dev *dev)
{
struct fb_info *info = pci_get_drvdata(dev);
- struct s3fb_info *par = info->par;
if (info) {
#ifdef CONFIG_MTRR
+ struct s3fb_info *par = info->par;
+
if (par->mtrr_reg >= 0) {
mtrr_del(par->mtrr_reg, 0, 0);
par->mtrr_reg = -1;
BCI_SEND(0);
BCI_SEND(BCI_CMD_SETREG | (1 << 16) | BCI_GBD2);
BCI_SEND(GlobalBitmapDescriptor);
+
+ /*
+ * I don't know why, but sending this twice fixes the initial black
+ * screen and prevents X from crashing, at least on Toshiba laptops
+ * with a SavageIX.
+ * --Tony
+ */
+ par->bci_ptr = 0;
+ par->SavageWaitFifo(par, 4);
+
+ BCI_SEND(BCI_CMD_SETREG | (1 << 16) | BCI_GBD1);
+ BCI_SEND(0);
+ BCI_SEND(BCI_CMD_SETREG | (1 << 16) | BCI_GBD2);
+ BCI_SEND(GlobalBitmapDescriptor);
}
static void savagefb_set_clip(struct fb_info *info)
#ifdef SAVAGEFB_DEBUG
/* This function is used to debug, it prints out the contents of s3 regs */
-static void SavagePrintRegs(void)
+static void SavagePrintRegs(struct savagefb_par *par)
{
unsigned char i;
int vgaCRIndex = 0x3d4;
savagefb_set_fix(info);
savagefb_set_clip(info);
- SavagePrintRegs();
+ SavagePrintRegs(par);
return 0;
}
int video_len;
DBG("savagefb_probe");
- SavagePrintRegs();
info = framebuffer_alloc(sizeof(struct savagefb_par), &dev->dev);
if (!info)
r_dprintk("sst_dac_write(%#x, %#x)\n", reg, val);
reg &= 0x07;
__sst_write(vbase, DAC_DATA,(((u32)reg << 8)) | (u32)val);
+ __sst_wait_idle(vbase);
}
/* indexed access to ti/att dacs */
extern struct file_system_type v9fs_fs_type;
extern const struct address_space_operations v9fs_addr_operations;
extern const struct file_operations v9fs_file_operations;
-extern const struct file_operations v9fs_cached_file_operations;
extern const struct file_operations v9fs_dir_operations;
extern struct dentry_operations v9fs_dentry_operations;
extern struct dentry_operations v9fs_cached_dentry_operations;
#include "v9fs_vfs.h"
#include "fid.h"
+static const struct file_operations v9fs_cached_file_operations;
+
/**
* v9fs_file_open - open a file (or directory)
* @inode: inode to be opened
return total;
}
-const struct file_operations v9fs_cached_file_operations = {
+static const struct file_operations v9fs_cached_file_operations = {
.llseek = generic_file_llseek,
.read = do_sync_read,
.aio_read = generic_file_aio_read,
file_inode = file->d_inode;
sb = file_inode->i_sb;
v9ses = v9fs_inode2v9ses(file_inode);
- v9fid = v9fs_fid_lookup(file);
+ v9fid = v9fs_fid_clone(file);
if(IS_ERR(v9fid))
return PTR_ERR(v9fid);
0);
if (IS_ERR((void *)info->mmap_base)) {
up_write(&ctx->mm->mmap_sem);
- printk("mmap err: %ld\n", -info->mmap_base);
info->mmap_size = 0;
aio_free_ring(ctx);
return -EAGAIN;
if (inf) {
struct autofs_sb_info *sbi = autofs4_sbi(de->d_sb);
- inf->dentry = NULL;
- inf->inode = NULL;
-
if (sbi) {
spin_lock(&sbi->rehash_lock);
if (!list_empty(&inf->rehash))
spin_unlock(&sbi->rehash_lock);
}
+ inf->dentry = NULL;
+ inf->inode = NULL;
+
autofs4_free_ino(inf);
}
}
#define INTERPRETER_ELF 2
#ifndef STACK_RND_MASK
-#define STACK_RND_MASK 0x7ff /* with 4K pages 8MB of VA */
+#define STACK_RND_MASK (0x7ff >> (PAGE_SHIFT - 12)) /* 8MB of VA */
#endif
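A quick arithmetic check of the new definition (my own numbers, not part of the patch): with 4 KB pages PAGE_SHIFT is 12, the mask stays 0x7ff, and 0x7ff pages * 4 KB is roughly 8 MB of randomized stack VA; with 64 KB pages PAGE_SHIFT is 16, the mask becomes 0x7ff >> 4 = 0x7f, and 0x7f pages * 64 KB is again roughly 8 MB, so the randomized span no longer grows with the page size.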
static unsigned long randomize_stack_top(unsigned long stack_top)
DUMP_SEEK(PAGE_SIZE);
} else {
if (page == ZERO_PAGE(addr)) {
- DUMP_SEEK(PAGE_SIZE);
+ if (!dump_seek(file, PAGE_SIZE)) {
+ page_cache_release(page);
+ goto end_coredump;
+ }
} else {
void *kaddr;
flush_cache_page(vma, addr,
int executable_stack;
int retval, i;
+ kdebug("____ LOAD %d ____", current->pid);
+
memset(&exec_params, 0, sizeof(exec_params));
memset(&interp_params, 0, sizeof(interp_params));
if (mm) {
if (phdr->p_flags & PF_X) {
- mm->start_code = seg->addr;
- mm->end_code = seg->addr + phdr->p_memsz;
+ if (!mm->start_code) {
+ mm->start_code = seg->addr;
+ mm->end_code = seg->addr +
+ phdr->p_memsz;
+ }
} else if (!mm->start_data) {
mm->start_data = seg->addr;
#ifndef CONFIG_MMU
if (mm) {
if (phdr->p_flags & PF_X) {
- mm->start_code = maddr;
- mm->end_code = maddr + phdr->p_memsz;
+ if (!mm->start_code) {
+ mm->start_code = maddr;
+ mm->end_code = maddr + phdr->p_memsz;
+ }
} else if (!mm->start_data) {
mm->start_data = maddr;
mm->end_data = maddr + phdr->p_memsz;
DUMP_SEEK(file->f_pos + PAGE_SIZE);
}
else if (page == ZERO_PAGE(addr)) {
- DUMP_SEEK(file->f_pos + PAGE_SIZE);
page_cache_release(page);
+ DUMP_SEEK(file->f_pos + PAGE_SIZE);
}
else {
void *kaddr;
/* temporary */
if (major == 0) {
for (i = ARRAY_SIZE(chrdevs)-1; i > 0; i--) {
- if (is_lanana_major(i))
- continue;
if (chrdevs[i] == NULL)
break;
}
Fix hang (in i_size_read) when simultaneous size update of same remote file
on smp system corrupts sequence number. Do not reread unnecessarily partial page
(which we are about to overwrite anyway) when writing out file opened rw.
+When the DOS attribute of a file on a non-Unix server changes on the server
+side from read-only back to read-write, reflect this change in the default
+file mode (we had been leaving a file's mode read-only until the inode was
+reloaded). Allow setting the attribute back to ATTR_NORMAL (removing the
+read-only DOS attribute when the archive DOS attribute is not set and we are
+changing the mode back to writeable on a server which does not support the
+Unix Extensions).
Version 1.47
------------
*/
#define CIFS_NO_HANDLE 0xFFFF
-#define NO_CHANGE_64 0xFFFFFFFFFFFFFFFFULL
+#define NO_CHANGE_64 cpu_to_le64(0xFFFFFFFFFFFFFFFFULL)
#define NO_CHANGE_32 0xFFFFFFFFUL
/* IPC$ in ASCII */
calls including posix open
and posix unlink */
#ifdef CONFIG_CIFS_POSIX
-#define CIFS_UNIX_CAP_MASK 0x0000003b
+/* We cannot advertise the pathnames capability yet: until we send the new
+   posix create SMB, the server could treat handles opened with the older
+   ntcreatex (by a new client which knows how to send posix path ops) as
+   non-posix handles, which can affect write behavior with byte range locks.
+   We can add the POSIX_PATH_OPS cap back in once Posix Create/Mkdir is
+   finished. */
+/* #define CIFS_UNIX_CAP_MASK 0x0000003b */
+#define CIFS_UNIX_CAP_MASK 0x0000001b
#else
#define CIFS_UNIX_CAP_MASK 0x00000013
#endif /* CONFIG_CIFS_POSIX */
mode e.g. 555 */
if (cifsInfo->cifsAttrs & ATTR_READONLY)
inode->i_mode &= ~(S_IWUGO);
+ else if ((inode->i_mode & S_IWUGO) == 0)
+ /* the ATTR_READONLY flag may have been */
+ /* changed on server -- set any w bits */
+ /* allowed by mnt_file_mode */
+ inode->i_mode |= (S_IWUGO &
+ cifs_sb->mnt_file_mode);
/* BB add code here -
validate if device or weird share or device type? */
}
struct cifsFileInfo *open_file = NULL;
FILE_BASIC_INFO time_buf;
int set_time = FALSE;
+ int set_dosattr = FALSE;
__u64 mode = 0xFFFFFFFFFFFFFFFFULL;
__u64 uid = 0xFFFFFFFFFFFFFFFFULL;
__u64 gid = 0xFFFFFFFFFFFFFFFFULL;
else if (attrs->ia_valid & ATTR_MODE) {
rc = 0;
if ((mode & S_IWUGO) == 0) /* not writeable */ {
- if ((cifsInode->cifsAttrs & ATTR_READONLY) == 0)
+ if ((cifsInode->cifsAttrs & ATTR_READONLY) == 0) {
+ set_dosattr = TRUE;
time_buf.Attributes =
cpu_to_le32(cifsInode->cifsAttrs |
ATTR_READONLY);
+ }
} else if ((mode & S_IWUGO) == S_IWUGO) {
- if (cifsInode->cifsAttrs & ATTR_READONLY)
+ if (cifsInode->cifsAttrs & ATTR_READONLY) {
+ set_dosattr = TRUE;
time_buf.Attributes =
cpu_to_le32(cifsInode->cifsAttrs &
(~ATTR_READONLY));
+ /* Windows ignores attributes set to zero */
+ if(time_buf.Attributes == 0)
+ time_buf.Attributes |=
+ cpu_to_le32(ATTR_NORMAL);
+ }
}
/* BB to be implemented -
via Windows security descriptors or streams */
} else
time_buf.ChangeTime = 0;
- if (set_time || time_buf.Attributes) {
+ if (set_time || set_dosattr) {
time_buf.CreationTime = 0; /* do not change */
/* In the future we should experiment - try setting timestamps
via Handle (SetFileInfo) instead of by path */
tmp_inode->i_mode |= S_IFREG;
if (attr & ATTR_READONLY)
tmp_inode->i_mode &= ~(S_IWUGO);
+ else if ((tmp_inode->i_mode & S_IWUGO) == 0)
+ /* the ATTR_READONLY flag may have been changed on */
+ /* server -- set any w bits allowed by mnt_file_mode */
+ tmp_inode->i_mode |= (S_IWUGO & cifs_sb->mnt_file_mode);
} /* could add code here - to validate if device or weird share type? */
/* can not fill in nlink here as in qpathinfo version and Unx search */
HANDLE_IOCTL(I2C_SMBUS, do_i2c_smbus_ioctl)
/* wireless */
HANDLE_IOCTL(SIOCGIWRANGE, do_wireless_ioctl)
+HANDLE_IOCTL(SIOCGIWPRIV, do_wireless_ioctl)
+HANDLE_IOCTL(SIOCGIWSTATS, do_wireless_ioctl)
HANDLE_IOCTL(SIOCSIWSPY, do_wireless_ioctl)
HANDLE_IOCTL(SIOCGIWSPY, do_wireless_ioctl)
HANDLE_IOCTL(SIOCSIWTHRSPY, do_wireless_ioctl)
HANDLE_IOCTL(SIOCGIWTHRSPY, do_wireless_ioctl)
+HANDLE_IOCTL(SIOCSIWMLME, do_wireless_ioctl)
HANDLE_IOCTL(SIOCGIWAPLIST, do_wireless_ioctl)
+HANDLE_IOCTL(SIOCSIWSCAN, do_wireless_ioctl)
HANDLE_IOCTL(SIOCGIWSCAN, do_wireless_ioctl)
HANDLE_IOCTL(SIOCSIWESSID, do_wireless_ioctl)
HANDLE_IOCTL(SIOCGIWESSID, do_wireless_ioctl)
HANDLE_IOCTL(SIOCGIWNICKN, do_wireless_ioctl)
HANDLE_IOCTL(SIOCSIWENCODE, do_wireless_ioctl)
HANDLE_IOCTL(SIOCGIWENCODE, do_wireless_ioctl)
+HANDLE_IOCTL(SIOCSIWGENIE, do_wireless_ioctl)
+HANDLE_IOCTL(SIOCGIWGENIE, do_wireless_ioctl)
+HANDLE_IOCTL(SIOCSIWENCODEEXT, do_wireless_ioctl)
+HANDLE_IOCTL(SIOCGIWENCODEEXT, do_wireless_ioctl)
+HANDLE_IOCTL(SIOCSIWPMKSA, do_wireless_ioctl)
HANDLE_IOCTL(SIOCSIFBR, old_bridge_ioctl)
HANDLE_IOCTL(SIOCGIFBR, old_bridge_ioctl)
HANDLE_IOCTL(RTC_IRQP_READ32, rtc_ioctl)
err = -ENOMEM;
dentry = d_alloc(configfs_sb->s_root, &name);
- if (!dentry)
- goto out_release;
-
- d_add(dentry, NULL);
+ if (dentry) {
+ d_add(dentry, NULL);
- err = configfs_attach_group(sd->s_element, &group->cg_item,
- dentry);
- if (!err)
- dentry = NULL;
- else
- d_delete(dentry);
+ err = configfs_attach_group(sd->s_element, &group->cg_item,
+ dentry);
+ if (err) {
+ d_delete(dentry);
+ dput(dentry);
+ }
+ }
mutex_unlock(&configfs_sb->s_root->d_inode->i_mutex);
- if (dentry) {
- dput(dentry);
-out_release:
- unlink_group(group);
- configfs_release_fs();
+ if (err) {
+ unlink_group(group);
+ configfs_release_fs();
}
return err;
*/
static void ecryptfs_d_release(struct dentry *dentry)
{
- struct dentry *lower_dentry;
-
- lower_dentry = ecryptfs_dentry_to_lower(dentry);
- if (ecryptfs_dentry_to_private(dentry))
+ if (ecryptfs_dentry_to_private(dentry)) {
+ if (ecryptfs_dentry_to_lower(dentry)) {
+ mntput(ecryptfs_dentry_to_lower_mnt(dentry));
+ dput(ecryptfs_dentry_to_lower(dentry));
+ }
kmem_cache_free(ecryptfs_dentry_info_cache,
ecryptfs_dentry_to_private(dentry));
- if (lower_dentry) {
- struct vfsmount *lower_mnt =
- ecryptfs_dentry_to_lower_mnt(dentry);
-
- mntput(lower_mnt);
- dput(lower_dentry);
}
return;
}
* name into corename, which must have space for at least
* CORENAME_MAX_SIZE bytes plus one byte for the zero terminator.
*/
-static void format_corename(char *corename, const char *pattern, long signr)
+static int format_corename(char *corename, const char *pattern, long signr)
{
const char *pat_ptr = pattern;
char *out_ptr = corename;
char *const out_end = corename + CORENAME_MAX_SIZE;
int rc;
int pid_in_pattern = 0;
+ int ispipe = 0;
+
+ if (*pattern == '|')
+ ispipe = 1;
/* Repeat as long as we have more pattern to process and more output
space */
*
* If core_pattern does not include a %p (as is the default)
* and core_uses_pid is set, then .%pid will be appended to
- * the filename */
- if (!pid_in_pattern
+ * the filename. Do not do this for piped commands. */
+ if (!ispipe && !pid_in_pattern
&& (core_uses_pid || atomic_read(&current->mm->mm_users) != 1)) {
rc = snprintf(out_ptr, out_end - out_ptr,
".%d", current->tgid);
goto out;
out_ptr += rc;
}
- out:
+out:
*out_ptr = 0;
+ return ispipe;
}
static void zap_process(struct task_struct *start)
* uses lock_kernel()
*/
lock_kernel();
- format_corename(corename, core_pattern, signr);
+ ispipe = format_corename(corename, core_pattern, signr);
unlock_kernel();
- if (corename[0] == '|') {
+ if (ispipe) {
/* SIGPIPE can happen, but it's just never processed */
if(call_usermodehelper_pipe(corename+1, NULL, NULL, &file)) {
printk(KERN_INFO "Core dump to %s pipe failed\n",
corename);
goto fail_unlock;
}
- ispipe = 1;
} else
file = filp_open(corename,
O_CREAT | 2 | O_NOFOLLOW | O_LARGEFILE | flag,
return ext3_journal_get_write_access(handle, bh);
}
-/*
- * The idea of this helper function is following:
- * if prepare_write has allocated some blocks, but not all of them, the
- * transaction must include the content of the newly allocated blocks.
- * This content is expected to be set to zeroes by block_prepare_write().
- * 2006/10/14 SAW
- */
-static int ext3_prepare_failure(struct file *file, struct page *page,
- unsigned from, unsigned to)
-{
- struct address_space *mapping;
- struct buffer_head *bh, *head, *next;
- unsigned block_start, block_end;
- unsigned blocksize;
- int ret;
- handle_t *handle = ext3_journal_current_handle();
-
- mapping = page->mapping;
- if (ext3_should_writeback_data(mapping->host)) {
- /* optimization: no constraints about data */
-skip:
- return ext3_journal_stop(handle);
- }
-
- head = page_buffers(page);
- blocksize = head->b_size;
- for ( bh = head, block_start = 0;
- bh != head || !block_start;
- block_start = block_end, bh = next)
- {
- next = bh->b_this_page;
- block_end = block_start + blocksize;
- if (block_end <= from)
- continue;
- if (block_start >= to) {
- block_start = to;
- break;
- }
- if (!buffer_mapped(bh))
- /* prepare_write failed on this bh */
- break;
- if (ext3_should_journal_data(mapping->host)) {
- ret = do_journal_get_write_access(handle, bh);
- if (ret) {
- ext3_journal_stop(handle);
- return ret;
- }
- }
- /*
- * block_start here becomes the first block where the current iteration
- * of prepare_write failed.
- */
- }
- if (block_start <= from)
- goto skip;
-
- /* commit allocated and zeroed buffers */
- return mapping->a_ops->commit_write(file, page, from, block_start);
-}
-
static int ext3_prepare_write(struct file *file, struct page *page,
unsigned from, unsigned to)
{
struct inode *inode = page->mapping->host;
- int ret, ret2;
- int needed_blocks = ext3_writepage_trans_blocks(inode);
+ int ret, needed_blocks = ext3_writepage_trans_blocks(inode);
handle_t *handle;
int retries = 0;
retry:
handle = ext3_journal_start(inode, needed_blocks);
- if (IS_ERR(handle))
- return PTR_ERR(handle);
+ if (IS_ERR(handle)) {
+ ret = PTR_ERR(handle);
+ goto out;
+ }
if (test_opt(inode->i_sb, NOBH) && ext3_should_writeback_data(inode))
ret = nobh_prepare_write(page, from, to, ext3_get_block);
else
ret = block_prepare_write(page, from, to, ext3_get_block);
if (ret)
- goto failure;
+ goto prepare_write_failed;
if (ext3_should_journal_data(inode)) {
ret = walk_page_buffers(handle, page_buffers(page),
from, to, NULL, do_journal_get_write_access);
- if (ret)
- /* fatal error, just put the handle and return */
- journal_stop(handle);
}
- return ret;
-
-failure:
- ret2 = ext3_prepare_failure(file, page, from, to);
- if (ret2 < 0)
- return ret2;
+prepare_write_failed:
+ if (ret)
+ ext3_journal_stop(handle);
if (ret == -ENOSPC && ext3_should_retry_alloc(inode->i_sb, &retries))
goto retry;
- /* retry number exceeded, or other error like -EDQUOT */
+out:
return ret;
}
BHDR(bh)->h_refcount = cpu_to_le32(
le32_to_cpu(BHDR(bh)->h_refcount) - 1);
error = ext3_journal_dirty_metadata(handle, bh);
- handle->h_sync = 1;
+ if (IS_SYNC(inode))
+ handle->h_sync = 1;
DQUOT_FREE_BLOCK(inode, 1);
ea_bdebug(bh, "refcount now=%d; releasing",
le32_to_cpu(BHDR(bh)->h_refcount));
return ext4_journal_get_write_access(handle, bh);
}
-/*
- * The idea of this helper function is following:
- * if prepare_write has allocated some blocks, but not all of them, the
- * transaction must include the content of the newly allocated blocks.
- * This content is expected to be set to zeroes by block_prepare_write().
- * 2006/10/14 SAW
- */
-static int ext4_prepare_failure(struct file *file, struct page *page,
- unsigned from, unsigned to)
-{
- struct address_space *mapping;
- struct buffer_head *bh, *head, *next;
- unsigned block_start, block_end;
- unsigned blocksize;
- int ret;
- handle_t *handle = ext4_journal_current_handle();
-
- mapping = page->mapping;
- if (ext4_should_writeback_data(mapping->host)) {
- /* optimization: no constraints about data */
-skip:
- return ext4_journal_stop(handle);
- }
-
- head = page_buffers(page);
- blocksize = head->b_size;
- for ( bh = head, block_start = 0;
- bh != head || !block_start;
- block_start = block_end, bh = next)
- {
- next = bh->b_this_page;
- block_end = block_start + blocksize;
- if (block_end <= from)
- continue;
- if (block_start >= to) {
- block_start = to;
- break;
- }
- if (!buffer_mapped(bh))
- /* prepare_write failed on this bh */
- break;
- if (ext4_should_journal_data(mapping->host)) {
- ret = do_journal_get_write_access(handle, bh);
- if (ret) {
- ext4_journal_stop(handle);
- return ret;
- }
- }
- /*
- * block_start here becomes the first block where the current iteration
- * of prepare_write failed.
- */
- }
- if (block_start <= from)
- goto skip;
-
- /* commit allocated and zeroed buffers */
- return mapping->a_ops->commit_write(file, page, from, block_start);
-}
-
static int ext4_prepare_write(struct file *file, struct page *page,
unsigned from, unsigned to)
{
struct inode *inode = page->mapping->host;
- int ret, ret2;
- int needed_blocks = ext4_writepage_trans_blocks(inode);
+ int ret, needed_blocks = ext4_writepage_trans_blocks(inode);
handle_t *handle;
int retries = 0;
retry:
handle = ext4_journal_start(inode, needed_blocks);
- if (IS_ERR(handle))
- return PTR_ERR(handle);
+ if (IS_ERR(handle)) {
+ ret = PTR_ERR(handle);
+ goto out;
+ }
if (test_opt(inode->i_sb, NOBH) && ext4_should_writeback_data(inode))
ret = nobh_prepare_write(page, from, to, ext4_get_block);
else
ret = block_prepare_write(page, from, to, ext4_get_block);
if (ret)
- goto failure;
+ goto prepare_write_failed;
if (ext4_should_journal_data(inode)) {
ret = walk_page_buffers(handle, page_buffers(page),
from, to, NULL, do_journal_get_write_access);
- if (ret)
- /* fatal error, just put the handle and return */
- ext4_journal_stop(handle);
}
- return ret;
-
-failure:
- ret2 = ext4_prepare_failure(file, page, from, to);
- if (ret2 < 0)
- return ret2;
+prepare_write_failed:
+ if (ret)
+ ext4_journal_stop(handle);
if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
goto retry;
- /* retry number exceeded, or other error like -EDQUOT */
+out:
return ret;
}
.d_revalidate = fuse_dentry_revalidate,
};
-static int valid_mode(int m)
+int fuse_valid_type(int m)
{
return S_ISREG(m) || S_ISDIR(m) || S_ISLNK(m) || S_ISCHR(m) ||
S_ISBLK(m) || S_ISFIFO(m) || S_ISSOCK(m);
fuse_put_request(fc, req);
/* Zero nodeid is same as -ENOENT, but with valid timeout */
if (!err && outarg.nodeid &&
- (invalid_nodeid(outarg.nodeid) || !valid_mode(outarg.attr.mode)))
+ (invalid_nodeid(outarg.nodeid) ||
+ !fuse_valid_type(outarg.attr.mode)))
err = -EIO;
if (!err && outarg.nodeid) {
inode = fuse_iget(dir->i_sb, outarg.nodeid, outarg.generation,
* Remove connection from control filesystem
*/
void fuse_ctl_remove_conn(struct fuse_conn *fc);
+
+/**
+ * Is file type valid?
+ */
+int fuse_valid_type(int m);
case OPT_ROOTMODE:
if (match_octal(&args[0], &value))
return 0;
+ if (!fuse_valid_type(value))
+ return 0;
d->rootmode = value;
d->rootmode_present = 1;
break;
#include "hostfs.h"
#include "kern_util.h"
#include "kern.h"
-#include "user_util.h"
#include "init.h"
struct hostfs_inode_info {
static int hostfs_fill_sb_common(struct super_block *sb, void *d, int silent)
{
struct inode *root_inode;
- char *name, *data = d;
+ char *host_root_path, *req_root = d;
int err;
sb->s_blocksize = 1024;
sb->s_op = &hostfs_sbops;
/* NULL is printed as <NULL> by sprintf: avoid that. */
- if (data == NULL)
- data = "";
+ if (req_root == NULL)
+ req_root = "";
err = -ENOMEM;
- name = kmalloc(strlen(root_ino) + 1
- + strlen(data) + 1, GFP_KERNEL);
- if(name == NULL)
+ host_root_path = kmalloc(strlen(root_ino) + 1
+ + strlen(req_root) + 1, GFP_KERNEL);
+ if(host_root_path == NULL)
goto out;
- sprintf(name, "%s/%s", root_ino, data);
+ sprintf(host_root_path, "%s/%s", root_ino, req_root);
root_inode = iget(sb, 0);
if(root_inode == NULL)
if(err)
goto out_put;
- HOSTFS_I(root_inode)->host_filename = name;
- /* Avoid that in the error path, iput(root_inode) frees again name through
- * hostfs_destroy_inode! */
- name = NULL;
+ HOSTFS_I(root_inode)->host_filename = host_root_path;
+ /* Avoid having iput(root_inode) in the error path free
+ * host_root_path again through hostfs_destroy_inode! */
+ host_root_path = NULL;
err = -ENOMEM;
sb->s_root = d_alloc_root(root_inode);
out_put:
iput(root_inode);
out_free:
- kfree(name);
+ kfree(host_root_path);
out:
return(err);
}
* ... prune child dentries and writebacks if needed.
*/
if (atomic_read(&old_dentry->d_count) > 1) {
- nfs_wb_all(old_inode);
+ if (S_ISREG(old_inode->i_mode))
+ nfs_wb_all(old_inode);
shrink_dcache_parent(old_dentry);
}
nfs_inode_return_delegation(old_inode);
if (NFS_PROTO(data->inode)->commit_done(task, data) != 0)
return;
if (unlikely(task->tk_status < 0)) {
- dreq->error = task->tk_status;
+ dprintk("NFS: %5u commit failed with error %d.\n",
+ task->tk_pid, task->tk_status);
dreq->flags = NFS_ODIRECT_RESCHED_WRITES;
- }
- if (memcmp(&dreq->verf, &data->verf, sizeof(data->verf))) {
+ } else if (memcmp(&dreq->verf, &data->verf, sizeof(data->verf))) {
dprintk("NFS: %5u commit verify failed\n", task->tk_pid);
dreq->flags = NFS_ODIRECT_RESCHED_WRITES;
}
spin_lock(&dreq->lock);
+ if (unlikely(dreq->error != 0))
+ goto out_unlock;
if (unlikely(status < 0)) {
+ /* An error has occurred, so we should not commit */
+ dreq->flags = 0;
dreq->error = status;
- goto out_unlock;
}
dreq->count += data->res.count;
lock_kernel();
nfs_begin_data_update(inode);
/* Write all dirty data */
- filemap_write_and_wait(inode->i_mapping);
- nfs_wb_all(inode);
+ if (S_ISREG(inode->i_mode)) {
+ filemap_write_and_wait(inode->i_mapping);
+ nfs_wb_all(inode);
+ }
/*
* Return any delegations if we're going to change ACLs
*/
int err;
/* Flush out writes to the server in order to update c/mtime */
- nfs_sync_mapping_range(inode->i_mapping, 0, 0, FLUSH_NOCOMMIT);
+ if (S_ISREG(inode->i_mode))
+ nfs_sync_mapping_range(inode->i_mapping, 0, 0, FLUSH_NOCOMMIT);
/*
* We may force a getattr if the user cares about atime.
if (ret < 0)
goto error_0;
-#ifdef CONFIG_NFS_V4
ret = nfs_register_sysctl();
if (ret < 0)
goto error_1;
+#ifdef CONFIG_NFS_V4
ret = register_filesystem(&nfs4_fs_type);
if (ret < 0)
goto error_2;
#ifdef CONFIG_NFS_V4
error_2:
nfs_unregister_sysctl();
+#endif
error_1:
unregister_filesystem(&nfs_fs_type);
-#endif
error_0:
return ret;
}
.proc_handler = &proc_dointvec_jiffies,
.strategy = &sysctl_jiffies,
},
+ {
+ .ctl_name = CTL_UNNUMBERED,
+ .procname = "nfs_congestion_kb",
+ .data = &nfs_congestion_kb,
+ .maxlen = sizeof(nfs_congestion_kb),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
{ .ctl_name = 0 }
};
#include <linux/pagemap.h>
#include <linux/file.h>
#include <linux/writeback.h>
+#include <linux/swap.h>
#include <linux/sunrpc/clnt.h>
#include <linux/nfs_fs.h>
static struct nfs_page * nfs_update_request(struct nfs_open_context*,
struct page *,
unsigned int, unsigned int);
-static void nfs_mark_request_dirty(struct nfs_page *req);
-static int nfs_wait_on_write_congestion(struct address_space *, int);
static long nfs_flush_mapping(struct address_space *mapping, struct writeback_control *wbc, int how);
static const struct rpc_call_ops nfs_write_partial_ops;
static const struct rpc_call_ops nfs_write_full_ops;
static mempool_t *nfs_wdata_mempool;
static mempool_t *nfs_commit_mempool;
-static DECLARE_WAIT_QUEUE_HEAD(nfs_write_congestion);
-
struct nfs_write_data *nfs_commit_alloc(void)
{
struct nfs_write_data *p = mempool_alloc(nfs_commit_mempool, GFP_NOFS);
return 0;
}
+/*
+ * NFS congestion control
+ */
+
+int nfs_congestion_kb;
+
+#define NFS_CONGESTION_ON_THRESH (nfs_congestion_kb >> (PAGE_SHIFT-10))
+#define NFS_CONGESTION_OFF_THRESH \
+ (NFS_CONGESTION_ON_THRESH - (NFS_CONGESTION_ON_THRESH >> 2))
+
+static int nfs_set_page_writeback(struct page *page)
+{
+ int ret = test_set_page_writeback(page);
+
+ if (!ret) {
+ struct inode *inode = page->mapping->host;
+ struct nfs_server *nfss = NFS_SERVER(inode);
+
+ if (atomic_inc_return(&nfss->writeback) >
+ NFS_CONGESTION_ON_THRESH)
+ set_bdi_congested(&nfss->backing_dev_info, WRITE);
+ }
+ return ret;
+}
+
+static void nfs_end_page_writeback(struct page *page)
+{
+ struct inode *inode = page->mapping->host;
+ struct nfs_server *nfss = NFS_SERVER(inode);
+
+ end_page_writeback(page);
+ if (atomic_dec_return(&nfss->writeback) < NFS_CONGESTION_OFF_THRESH) {
+ clear_bdi_congested(&nfss->backing_dev_info, WRITE);
+ congestion_end(WRITE);
+ }
+}
+
/*
* Find an associated nfs write request, and prepare to flush it out
* Returns 1 if there was no write request, or if the request was
static int nfs_page_mark_flush(struct page *page)
{
struct nfs_page *req;
- spinlock_t *req_lock = &NFS_I(page->mapping->host)->req_lock;
+ struct nfs_inode *nfsi = NFS_I(page->mapping->host);
+ spinlock_t *req_lock = &nfsi->req_lock;
int ret;
spin_lock(req_lock);
return ret;
spin_lock(req_lock);
}
- spin_unlock(req_lock);
- if (test_and_set_bit(PG_FLUSHING, &req->wb_flags) == 0) {
- nfs_mark_request_dirty(req);
- set_page_writeback(page);
+ if (test_bit(PG_NEED_COMMIT, &req->wb_flags)) {
+ /* This request is marked for commit */
+ spin_unlock(req_lock);
+ nfs_unlock_request(req);
+ return 1;
}
+ if (nfs_set_page_writeback(page) == 0) {
+ nfs_list_remove_request(req);
+ /* add the request to the inode's dirty list. */
+ radix_tree_tag_set(&nfsi->nfs_page_tree,
+ req->wb_index, NFS_PAGE_TAG_DIRTY);
+ nfs_list_add_request(req, &nfsi->dirty);
+ nfsi->ndirty++;
+ spin_unlock(req_lock);
+ __mark_inode_dirty(page->mapping->host, I_DIRTY_PAGES);
+ } else
+ spin_unlock(req_lock);
ret = test_bit(PG_NEED_FLUSH, &req->wb_flags);
nfs_unlock_request(req);
return ret;
return err;
}
-/*
- * Note: causes nfs_update_request() to block on the assumption
- * that the writeback is generated due to memory pressure.
- */
int nfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
{
- struct backing_dev_info *bdi = mapping->backing_dev_info;
struct inode *inode = mapping->host;
int err;
err = generic_writepages(mapping, wbc);
if (err)
return err;
- while (test_and_set_bit(BDI_write_congested, &bdi->state) != 0) {
- if (wbc->nonblocking)
- return 0;
- nfs_wait_on_write_congestion(mapping, 0);
- }
err = nfs_flush_mapping(mapping, wbc, wb_priority(wbc));
if (err < 0)
goto out;
nfs_add_stats(inode, NFSIOS_WRITEPAGES, err);
err = 0;
out:
- clear_bit(BDI_write_congested, &bdi->state);
- wake_up_all(&nfs_write_congestion);
- congestion_end(WRITE);
return err;
}
}
SetPagePrivate(req->wb_page);
set_page_private(req->wb_page, (unsigned long)req);
+ if (PageDirty(req->wb_page))
+ set_bit(PG_NEED_FLUSH, &req->wb_flags);
nfsi->npages++;
atomic_inc(&req->wb_count);
return 0;
}
/*
- * Insert a write request into an inode
+ * Remove a write request from an inode
*/
static void nfs_inode_remove_request(struct nfs_page *req)
{
set_page_private(req->wb_page, 0);
ClearPagePrivate(req->wb_page);
radix_tree_delete(&nfsi->nfs_page_tree, req->wb_index);
+ if (test_and_clear_bit(PG_NEED_FLUSH, &req->wb_flags))
+ __set_page_dirty_nobuffers(req->wb_page);
nfsi->npages--;
if (!nfsi->npages) {
spin_unlock(&nfsi->req_lock);
nfs_release_request(req);
}
-/*
- * Add a request to the inode's dirty list.
- */
-static void
-nfs_mark_request_dirty(struct nfs_page *req)
-{
- struct inode *inode = req->wb_context->dentry->d_inode;
- struct nfs_inode *nfsi = NFS_I(inode);
-
- spin_lock(&nfsi->req_lock);
- radix_tree_tag_set(&nfsi->nfs_page_tree,
- req->wb_index, NFS_PAGE_TAG_DIRTY);
- nfs_list_add_request(req, &nfsi->dirty);
- nfsi->ndirty++;
- spin_unlock(&nfsi->req_lock);
- __mark_inode_dirty(inode, I_DIRTY_PAGES);
-}
-
static void
nfs_redirty_request(struct nfs_page *req)
{
- clear_bit(PG_FLUSHING, &req->wb_flags);
__set_page_dirty_nobuffers(req->wb_page);
}
static inline int
nfs_dirty_request(struct nfs_page *req)
{
- return test_bit(PG_FLUSHING, &req->wb_flags) == 0;
+ struct page *page = req->wb_page;
+
+ if (page == NULL || test_bit(PG_NEED_COMMIT, &req->wb_flags))
+ return 0;
+ return !PageWriteback(req->wb_page);
}
#if defined(CONFIG_NFS_V3) || defined(CONFIG_NFS_V4)
spin_lock(&nfsi->req_lock);
nfs_list_add_request(req, &nfsi->commit);
nfsi->ncommit++;
+ set_bit(PG_NEED_COMMIT, &(req)->wb_flags);
spin_unlock(&nfsi->req_lock);
inc_zone_page_state(req->wb_page, NR_UNSTABLE_NFS);
__mark_inode_dirty(inode, I_DIRTY_DATASYNC);
}
+
+static inline
+int nfs_write_need_commit(struct nfs_write_data *data)
+{
+ return data->verf.committed != NFS_FILE_SYNC;
+}
+
+static inline
+int nfs_reschedule_unstable_write(struct nfs_page *req)
+{
+ if (test_bit(PG_NEED_COMMIT, &req->wb_flags)) {
+ nfs_mark_request_commit(req);
+ return 1;
+ }
+ if (test_and_clear_bit(PG_NEED_RESCHED, &req->wb_flags)) {
+ nfs_redirty_request(req);
+ return 1;
+ }
+ return 0;
+}
+#else
+static inline void
+nfs_mark_request_commit(struct nfs_page *req)
+{
+}
+
+static inline
+int nfs_write_need_commit(struct nfs_write_data *data)
+{
+ return 0;
+}
+
+static inline
+int nfs_reschedule_unstable_write(struct nfs_page *req)
+{
+ return 0;
+}
#endif
/*
while(!list_empty(head)) {
req = nfs_list_entry(head->next);
nfs_list_remove_request(req);
+ nfs_end_page_writeback(req->wb_page);
nfs_inode_remove_request(req);
nfs_clear_page_writeback(req);
}
req = nfs_list_entry(head->next);
dec_zone_page_state(req->wb_page, NR_UNSTABLE_NFS);
nfs_list_remove_request(req);
+ clear_bit(PG_NEED_COMMIT, &(req)->wb_flags);
nfs_inode_remove_request(req);
nfs_unlock_request(req);
}
}
#endif
-static int nfs_wait_on_write_congestion(struct address_space *mapping, int intr)
+static int nfs_wait_on_write_congestion(struct address_space *mapping)
{
+ struct inode *inode = mapping->host;
struct backing_dev_info *bdi = mapping->backing_dev_info;
- DEFINE_WAIT(wait);
int ret = 0;
might_sleep();
if (!bdi_write_congested(bdi))
return 0;
- nfs_inc_stats(mapping->host, NFSIOS_CONGESTIONWAIT);
+ nfs_inc_stats(inode, NFSIOS_CONGESTIONWAIT);
- if (intr) {
- struct rpc_clnt *clnt = NFS_CLIENT(mapping->host);
+ do {
+ struct rpc_clnt *clnt = NFS_CLIENT(inode);
sigset_t oldset;
rpc_clnt_sigmask(clnt, &oldset);
- prepare_to_wait(&nfs_write_congestion, &wait, TASK_INTERRUPTIBLE);
- if (bdi_write_congested(bdi)) {
- if (signalled())
- ret = -ERESTARTSYS;
- else
- schedule();
- }
+ ret = congestion_wait_interruptible(WRITE, HZ/10);
rpc_clnt_sigunmask(clnt, &oldset);
- } else {
- prepare_to_wait(&nfs_write_congestion, &wait, TASK_UNINTERRUPTIBLE);
- if (bdi_write_congested(bdi))
- schedule();
- }
- finish_wait(&nfs_write_congestion, &wait);
+ if (ret == -ERESTARTSYS)
+ break;
+ ret = 0;
+ } while (bdi_write_congested(bdi));
+
return ret;
}
-
/*
* Try to update any existing write request, or create one if there is none.
* In order to match, the request's credentials must match those of
static struct nfs_page * nfs_update_request(struct nfs_open_context* ctx,
struct page *page, unsigned int offset, unsigned int bytes)
{
- struct inode *inode = page->mapping->host;
+ struct address_space *mapping = page->mapping;
+ struct inode *inode = mapping->host;
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_page *req, *new = NULL;
unsigned long rqend, end;
end = offset + bytes;
- if (nfs_wait_on_write_congestion(page->mapping, NFS_SERVER(inode)->flags & NFS_MOUNT_INTR))
+ if (nfs_wait_on_write_congestion(mapping))
return ERR_PTR(-ERESTARTSYS);
for (;;) {
/* Loop over all inode entries and see if we find
static void nfs_writepage_release(struct nfs_page *req)
{
- end_page_writeback(req->wb_page);
-#if defined(CONFIG_NFS_V3) || defined(CONFIG_NFS_V4)
- if (!PageError(req->wb_page)) {
- if (NFS_NEED_RESCHED(req)) {
- nfs_redirty_request(req);
- goto out;
- } else if (NFS_NEED_COMMIT(req)) {
- nfs_mark_request_commit(req);
- goto out;
- }
- }
- nfs_inode_remove_request(req);
-
-out:
- nfs_clear_commit(req);
- nfs_clear_reschedule(req);
-#else
- nfs_inode_remove_request(req);
-#endif
+ if (PageError(req->wb_page) || !nfs_reschedule_unstable_write(req)) {
+ nfs_end_page_writeback(req->wb_page);
+ nfs_inode_remove_request(req);
+ } else
+ nfs_end_page_writeback(req->wb_page);
nfs_clear_page_writeback(req);
}
nfs_writedata_release(data);
}
nfs_redirty_request(req);
+ nfs_end_page_writeback(req->wb_page);
nfs_clear_page_writeback(req);
return -ENOMEM;
}
struct nfs_page *req = nfs_list_entry(head->next);
nfs_list_remove_request(req);
nfs_redirty_request(req);
+ nfs_end_page_writeback(req->wb_page);
nfs_clear_page_writeback(req);
}
return -ENOMEM;
req = nfs_list_entry(head->next);
nfs_list_remove_request(req);
nfs_redirty_request(req);
+ nfs_end_page_writeback(req->wb_page);
nfs_clear_page_writeback(req);
}
return error;
nfs_set_pageerror(page);
req->wb_context->error = task->tk_status;
dprintk(", error = %d\n", task->tk_status);
- } else {
-#if defined(CONFIG_NFS_V3) || defined(CONFIG_NFS_V4)
- if (data->verf.committed < NFS_FILE_SYNC) {
- if (!NFS_NEED_COMMIT(req)) {
- nfs_defer_commit(req);
- memcpy(&req->wb_verf, &data->verf, sizeof(req->wb_verf));
- dprintk(" defer commit\n");
- } else if (memcmp(&req->wb_verf, &data->verf, sizeof(req->wb_verf))) {
- nfs_defer_reschedule(req);
- dprintk(" server reboot detected\n");
- }
- } else
-#endif
- dprintk(" OK\n");
+ goto out;
}
+ if (nfs_write_need_commit(data)) {
+ spinlock_t *req_lock = &NFS_I(page->mapping->host)->req_lock;
+
+ spin_lock(req_lock);
+ if (test_bit(PG_NEED_RESCHED, &req->wb_flags)) {
+ /* Do nothing; we need to resend the writes */
+ } else if (!test_and_set_bit(PG_NEED_COMMIT, &req->wb_flags)) {
+ memcpy(&req->wb_verf, &data->verf, sizeof(req->wb_verf));
+ dprintk(" defer commit\n");
+ } else if (memcmp(&req->wb_verf, &data->verf, sizeof(req->wb_verf))) {
+ set_bit(PG_NEED_RESCHED, &req->wb_flags);
+ clear_bit(PG_NEED_COMMIT, &req->wb_flags);
+ dprintk(" server reboot detected\n");
+ }
+ spin_unlock(req_lock);
+ } else
+ dprintk(" OK\n");
+
+out:
if (atomic_dec_and_test(&req->wb_complete))
nfs_writepage_release(req);
}
if (task->tk_status < 0) {
nfs_set_pageerror(page);
req->wb_context->error = task->tk_status;
- end_page_writeback(page);
- nfs_inode_remove_request(req);
dprintk(", error = %d\n", task->tk_status);
- goto next;
+ goto remove_request;
}
- end_page_writeback(page);
-#if defined(CONFIG_NFS_V3) || defined(CONFIG_NFS_V4)
- if (data->args.stable != NFS_UNSTABLE || data->verf.committed == NFS_FILE_SYNC) {
- nfs_inode_remove_request(req);
- dprintk(" OK\n");
+ if (nfs_write_need_commit(data)) {
+ memcpy(&req->wb_verf, &data->verf, sizeof(req->wb_verf));
+ nfs_mark_request_commit(req);
+ nfs_end_page_writeback(page);
+ dprintk(" marked for commit\n");
goto next;
}
- memcpy(&req->wb_verf, &data->verf, sizeof(req->wb_verf));
- nfs_mark_request_commit(req);
- dprintk(" marked for commit\n");
-#else
+ dprintk(" OK\n");
+remove_request:
+ nfs_end_page_writeback(page);
nfs_inode_remove_request(req);
-#endif
next:
nfs_clear_page_writeback(req);
}
while (!list_empty(&data->pages)) {
req = nfs_list_entry(data->pages.next);
nfs_list_remove_request(req);
+ clear_bit(PG_NEED_COMMIT, &(req)->wb_flags);
dec_zone_page_state(req->wb_page, NR_UNSTABLE_NFS);
dprintk("NFS: commit (%s/%Ld %d@%Ld)",
int nfs_set_page_dirty(struct page *page)
{
+ spinlock_t *req_lock = &NFS_I(page->mapping->host)->req_lock;
struct nfs_page *req;
+ int ret;
- req = nfs_page_find_request(page);
+ spin_lock(req_lock);
+ req = nfs_page_find_request_locked(page);
if (req != NULL) {
/* Mark any existing write requests for flushing */
- set_bit(PG_NEED_FLUSH, &req->wb_flags);
+ ret = !test_and_set_bit(PG_NEED_FLUSH, &req->wb_flags);
+ spin_unlock(req_lock);
nfs_release_request(req);
+ return ret;
}
- return __set_page_dirty_nobuffers(page);
+ ret = __set_page_dirty_nobuffers(page);
+ spin_unlock(req_lock);
+ return ret;
}
if (nfs_commit_mempool == NULL)
return -ENOMEM;
+ /*
+ * NFS congestion size, scale with available memory.
+ *
+ * 64MB: 8192k
+ * 128MB: 11585k
+ * 256MB: 16384k
+ * 512MB: 23170k
+ * 1GB: 32768k
+ * 2GB: 46340k
+ * 4GB: 65536k
+ * 8GB: 92681k
+ * 16GB: 131072k
+ *
+ * This allows larger machines to have larger/more transfers.
+ * Limit the default to 256M
+ */
+ nfs_congestion_kb = (16*int_sqrt(totalram_pages)) << (PAGE_SHIFT-10);
+ if (nfs_congestion_kb > 256*1024)
+ nfs_congestion_kb = 256*1024;
+
return 0;
}
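For reference, a worked example of the scaling above (my own arithmetic, not part of the patch): on a 1 GB machine with 4 KB pages, totalram_pages is 262144, int_sqrt gives 512, 16 * 512 = 8192, and shifting left by PAGE_SHIFT - 10 = 2 yields 32768k, matching the 1GB row of the table; the 256*1024 cap then holds the threshold at 256 MB on larger systems.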
#define NFS3_ENTRY_BAGGAGE (2 + 1 + 2 + 1)
#define NFS3_ENTRYPLUS_BAGGAGE (1 + 21 + 1 + (NFS3_FHSIZE >> 2))
static int
-encode_entry(struct readdir_cd *ccd, const char *name,
- int namlen, off_t offset, ino_t ino, unsigned int d_type, int plus)
+encode_entry(struct readdir_cd *ccd, const char *name, int namlen,
+ loff_t offset, ino_t ino, unsigned int d_type, int plus)
{
struct nfsd3_readdirres *cd = container_of(ccd, struct nfsd3_readdirres,
common);
*cd->offset1 = htonl(offset64 & 0xffffffff);
cd->offset1 = NULL;
} else {
- xdr_encode_hyper(cd->offset, (u64) offset);
+ xdr_encode_hyper(cd->offset, offset64);
}
}
struct posix_acl_summary pas;
unsigned short deny;
int eflag = ((flags & NFS4_ACL_TYPE_DEFAULT) ?
- NFS4_INHERITANCE_FLAGS : 0);
+ NFS4_INHERITANCE_FLAGS | NFS4_ACE_INHERIT_ONLY_ACE : 0);
BUG_ON(pacl->a_count < 3);
summarize_posix_acl(pacl, &pas);
status = nfserr_clid_inuse;
if (!cmp_creds(&conf->cl_cred, &rqstp->rq_cred)
|| conf->cl_addr != sin->sin_addr.s_addr) {
- printk("NFSD: setclientid: string in use by client"
- "(clientid %08x/%08x)\n",
- conf->cl_clientid.cl_boot, conf->cl_clientid.cl_id);
+ dprintk("NFSD: setclientid: string in use by client"
+ "at %u.%u.%u.%u\n", NIPQUAD(conf->cl_addr));
goto out;
}
}
unhash_delegation(dp);
}
- cancel_delayed_work(&laundromat_work);
nfsd4_shutdown_recdir();
nfs4_init = 0;
}
#include <linux/stat.h>
#include <linux/dcache.h>
#include <linux/mount.h>
-#include <asm/pgtable.h>
#include <linux/sunrpc/clnt.h>
#include <linux/sunrpc/svc.h>
ocfs2_rw_unlock(inode, 0);
}
+/*
+ * ocfs2_invalidatepage() and ocfs2_releasepage() are shamelessly stolen
+ * from ext3. PageChecked() bits have been removed as OCFS2 does not
+ * do journalled data.
+ */
+static void ocfs2_invalidatepage(struct page *page, unsigned long offset)
+{
+ journal_t *journal = OCFS2_SB(page->mapping->host->i_sb)->journal->j_journal;
+
+ journal_invalidatepage(journal, page, offset);
+}
+
+static int ocfs2_releasepage(struct page *page, gfp_t wait)
+{
+ journal_t *journal = OCFS2_SB(page->mapping->host->i_sb)->journal->j_journal;
+
+ if (!page_has_buffers(page))
+ return 0;
+ return journal_try_to_free_buffers(journal, page, wait);
+}
+
static ssize_t ocfs2_direct_IO(int rw,
struct kiocb *iocb,
const struct iovec *iov,
.commit_write = ocfs2_commit_write,
.bmap = ocfs2_bmap,
.sync_page = block_sync_page,
- .direct_IO = ocfs2_direct_IO
+ .direct_IO = ocfs2_direct_IO,
+ .invalidatepage = ocfs2_invalidatepage,
+ .releasepage = ocfs2_releasepage,
+ .migratepage = buffer_migrate_page,
};
const char *page,
size_t count)
{
+ struct task_struct *hb_task;
long fd;
int sectsize;
char *p = (char *)page;
*/
atomic_set(&reg->hr_steady_iterations, O2HB_LIVE_THRESHOLD + 1);
- reg->hr_task = kthread_run(o2hb_thread, reg, "o2hb-%s",
- reg->hr_item.ci_name);
- if (IS_ERR(reg->hr_task)) {
- ret = PTR_ERR(reg->hr_task);
+ hb_task = kthread_run(o2hb_thread, reg, "o2hb-%s",
+ reg->hr_item.ci_name);
+ if (IS_ERR(hb_task)) {
+ ret = PTR_ERR(hb_task);
mlog_errno(ret);
- reg->hr_task = NULL;
goto out;
}
+ spin_lock(&o2hb_live_lock);
+ reg->hr_task = hb_task;
+ spin_unlock(&o2hb_live_lock);
+
ret = wait_event_interruptible(o2hb_steady_queue,
atomic_read(&reg->hr_steady_iterations) == 0);
if (ret) {
- kthread_stop(reg->hr_task);
+ spin_lock(&o2hb_live_lock);
+ hb_task = reg->hr_task;
reg->hr_task = NULL;
+ spin_unlock(&o2hb_live_lock);
+
+ if (hb_task)
+ kthread_stop(hb_task);
goto out;
}
static ssize_t o2hb_region_pid_read(struct o2hb_region *reg,
char *page)
{
- if (!reg->hr_task)
+ pid_t pid = 0;
+
+ spin_lock(&o2hb_live_lock);
+ if (reg->hr_task)
+ pid = reg->hr_task->pid;
+ spin_unlock(&o2hb_live_lock);
+
+ if (!pid)
return 0;
- return sprintf(page, "%u\n", reg->hr_task->pid);
+ return sprintf(page, "%u\n", pid);
}
struct o2hb_region_attribute {
static void o2hb_heartbeat_group_drop_item(struct config_group *group,
struct config_item *item)
{
+ struct task_struct *hb_task;
struct o2hb_region *reg = to_o2hb_region(item);
/* stop the thread when the user removes the region dir */
- if (reg->hr_task) {
- kthread_stop(reg->hr_task);
- reg->hr_task = NULL;
- }
+ spin_lock(&o2hb_live_lock);
+ hb_task = reg->hr_task;
+ reg->hr_task = NULL;
+ spin_unlock(&o2hb_live_lock);
+
+ if (hb_task)
+ kthread_stop(hb_task);
config_item_put(item);
}
}
EXPORT_SYMBOL_GPL(o2hb_register_callback);
-int o2hb_unregister_callback(struct o2hb_callback_func *hc)
+void o2hb_unregister_callback(struct o2hb_callback_func *hc)
{
BUG_ON(hc->hc_magic != O2HB_CB_MAGIC);
__builtin_return_address(0), hc);
if (list_empty(&hc->hc_item))
- return 0;
+ return;
down_write(&o2hb_callback_sem);
list_del_init(&hc->hc_item);
up_write(&o2hb_callback_sem);
-
- return 0;
}
EXPORT_SYMBOL_GPL(o2hb_unregister_callback);
void *data,
int priority);
int o2hb_register_callback(struct o2hb_callback_func *hc);
-int o2hb_unregister_callback(struct o2hb_callback_func *hc);
+void o2hb_unregister_callback(struct o2hb_callback_func *hc);
void o2hb_fill_node_map(unsigned long *map,
unsigned bytes);
void o2hb_init(void);
void o2net_unregister_hb_callbacks(void)
{
- int ret;
-
- ret = o2hb_unregister_callback(&o2net_hb_up);
- if (ret < 0)
- mlog(ML_ERROR, "Status return %d unregistering heartbeat up "
- "callback!\n", ret);
-
- ret = o2hb_unregister_callback(&o2net_hb_down);
- if (ret < 0)
- mlog(ML_ERROR, "Status return %d unregistering heartbeat down "
- "callback!\n", ret);
+ o2hb_unregister_callback(&o2net_hb_up);
+ o2hb_unregister_callback(&o2net_hb_down);
}
int o2net_register_hb_callbacks(void)
void __dlm_unhash_lockres(struct dlm_lock_resource *lockres)
{
- hlist_del_init(&lockres->hash_node);
- dlm_lockres_put(lockres);
+ if (!hlist_unhashed(&lockres->hash_node)) {
+ hlist_del_init(&lockres->hash_node);
+ dlm_lockres_put(lockres);
+ }
}
void __dlm_insert_lockres(struct dlm_ctxt *dlm,
dlm_kick_thread(dlm, NULL);
while (dlm_migrate_all_locks(dlm)) {
+ /* Give dlm_thread time to purge the lockres' */
+ msleep(500);
mlog(0, "%s: more migration to do\n", dlm->name);
}
dlm_mark_domain_leaving(dlm);
dlm_lockres_put(res);
}
+/* Checks whether the lockres can be migrated. Returns 0 if yes, < 0
+ * if not. If 0, numlocks is set to the number of locks in the lockres.
+ */
+static int dlm_is_lockres_migrateable(struct dlm_ctxt *dlm,
+ struct dlm_lock_resource *res,
+ int *numlocks)
+{
+ int ret;
+ int i;
+ int count = 0;
+ struct list_head *queue, *iter;
+ struct dlm_lock *lock;
+
+ assert_spin_locked(&res->spinlock);
+
+ ret = -EINVAL;
+ if (res->owner == DLM_LOCK_RES_OWNER_UNKNOWN) {
+ mlog(0, "cannot migrate lockres with unknown owner!\n");
+ goto leave;
+ }
+
+ if (res->owner != dlm->node_num) {
+ mlog(0, "cannot migrate lockres this node doesn't own!\n");
+ goto leave;
+ }
+
+ ret = 0;
+ queue = &res->granted;
+ for (i = 0; i < 3; i++) {
+ list_for_each(iter, queue) {
+ lock = list_entry(iter, struct dlm_lock, list);
+ ++count;
+ if (lock->ml.node == dlm->node_num) {
+ mlog(0, "found a lock owned by this node still "
+ "on the %s queue! will not migrate this "
+ "lockres\n", (i == 0 ? "granted" :
+ (i == 1 ? "converting" :
+ "blocked")));
+ ret = -ENOTEMPTY;
+ goto leave;
+ }
+ }
+ queue++;
+ }
+
+ *numlocks = count;
+ mlog(0, "migrateable lockres having %d locks\n", *numlocks);
+
+leave:
+ return ret;
+}
/*
* DLM_MIGRATE_LOCKRES
struct dlm_master_list_entry *mle = NULL;
struct dlm_master_list_entry *oldmle = NULL;
struct dlm_migratable_lockres *mres = NULL;
- int ret = -EINVAL;
+ int ret = 0;
const char *name;
unsigned int namelen;
int mle_added = 0;
- struct list_head *queue, *iter;
- int i;
- struct dlm_lock *lock;
- int empty = 1, wake = 0;
+ int numlocks;
+ int wake = 0;
if (!dlm_grab(dlm))
return -EINVAL;
* ensure this lockres is a proper candidate for migration
*/
spin_lock(&res->spinlock);
- if (res->owner == DLM_LOCK_RES_OWNER_UNKNOWN) {
- mlog(0, "cannot migrate lockres with unknown owner!\n");
- spin_unlock(&res->spinlock);
- goto leave;
- }
- if (res->owner != dlm->node_num) {
- mlog(0, "cannot migrate lockres this node doesn't own!\n");
+ ret = dlm_is_lockres_migrateable(dlm, res, &numlocks);
+ if (ret < 0) {
spin_unlock(&res->spinlock);
goto leave;
}
- mlog(0, "checking queues...\n");
- queue = &res->granted;
- for (i=0; i<3; i++) {
- list_for_each(iter, queue) {
- lock = list_entry (iter, struct dlm_lock, list);
- empty = 0;
- if (lock->ml.node == dlm->node_num) {
- mlog(0, "found a lock owned by this node "
- "still on the %s queue! will not "
- "migrate this lockres\n",
- i==0 ? "granted" :
- (i==1 ? "converting" : "blocked"));
- spin_unlock(&res->spinlock);
- ret = -ENOTEMPTY;
- goto leave;
- }
- }
- queue++;
- }
- mlog(0, "all locks on this lockres are nonlocal. continuing\n");
spin_unlock(&res->spinlock);
/* no work to do */
- if (empty) {
+ if (numlocks == 0) {
mlog(0, "no locks were found on this lockres! done!\n");
- ret = 0;
goto leave;
}
{
int ret;
int lock_dropped = 0;
+ int numlocks;
+ spin_lock(&res->spinlock);
if (res->owner != dlm->node_num) {
if (!__dlm_lockres_unused(res)) {
mlog(ML_ERROR, "%s:%.*s: this node is not master, "
"trying to free this but locks remain\n",
dlm->name, res->lockname.len, res->lockname.name);
}
+ spin_unlock(&res->spinlock);
+ goto leave;
+ }
+
+ /* No need to migrate a lockres having no locks */
+ ret = dlm_is_lockres_migrateable(dlm, res, &numlocks);
+ if (ret >= 0 && numlocks == 0) {
+ spin_unlock(&res->spinlock);
goto leave;
}
+ spin_unlock(&res->spinlock);
/* Wheee! Migrate lockres here! Will sleep so drop spinlock. */
spin_unlock(&dlm->spinlock);
break;
}
- mlog(0, "removing lockres %.*s:%p from purgelist\n",
- lockres->lockname.len, lockres->lockname.name, lockres);
- list_del_init(&lockres->purge);
- dlm_lockres_put(lockres);
- dlm->purge_count--;
+ dlm_lockres_get(lockres);
/* This may drop and reacquire the dlm spinlock if it
* has to do migration. */
- mlog(0, "calling dlm_purge_lockres!\n");
if (dlm_purge_lockres(dlm, lockres))
BUG();
- mlog(0, "DONE calling dlm_purge_lockres!\n");
+
+ dlm_lockres_put(lockres);
/* Avoid adding any scheduling latencies */
cond_resched_lock(&dlm->spinlock);
}
status = o2hb_register_callback(&osb->osb_hb_up);
- if (status < 0)
+ if (status < 0) {
mlog_errno(status);
+ o2hb_unregister_callback(&osb->osb_hb_down);
+ }
bail:
return status;
void ocfs2_clear_hb_callbacks(struct ocfs2_super *osb)
{
- int status;
-
if (ocfs2_mount_local(osb))
return;
- status = o2hb_unregister_callback(&osb->osb_hb_down);
- if (status < 0)
- mlog_errno(status);
-
- status = o2hb_unregister_callback(&osb->osb_hb_up);
- if (status < 0)
- mlog_errno(status);
+ o2hb_unregister_callback(&osb->osb_hb_down);
+ o2hb_unregister_callback(&osb->osb_hb_up);
}
void ocfs2_stop_heartbeat(struct ocfs2_super *osb)
select CRC32
help
Say Y here if you would like to use hard disks under Linux which
- were partitioned using EFI GPT. Presently only useful on the
- IA-64 platform.
+ were partitioned using EFI GPT.
if (!get_capacity(disk) || !(state = check_partition(disk, bdev)))
return 0;
if (IS_ERR(state)) /* I/O error reading the partition table */
- return PTR_ERR(state);
+ return -EIO;
for (p = 1; p < state->limit; p++) {
sector_t size = state->parts[p].size;
sector_t from = state->parts[p].from;
proc-$(CONFIG_MMU) := mmu.o task_mmu.o
proc-y += inode.o root.o base.o generic.o array.o \
- proc_tty.o proc_misc.o proc_sysctl.o
+ proc_tty.o proc_misc.o
+proc-$(CONFIG_PROC_SYSCTL) += proc_sysctl.o
proc-$(CONFIG_PROC_KCORE) += kcore.o
proc-$(CONFIG_PROC_VMCORE) += vmcore.o
proc-$(CONFIG_PROC_DEVICETREE) += proc_devtree.o
size_t count, loff_t *ppos)
{
struct inode * inode = file->f_path.dentry->d_inode;
- unsigned long page;
+ char *p = NULL;
ssize_t length;
struct task_struct *task = get_proc_task(inode);
- length = -ESRCH;
if (!task)
- goto out_no_task;
-
- if (count > PAGE_SIZE)
- count = PAGE_SIZE;
- length = -ENOMEM;
- if (!(page = __get_free_page(GFP_KERNEL)))
- goto out;
+ return -ESRCH;
length = security_getprocattr(task,
(char*)file->f_path.dentry->d_name.name,
- (void*)page, count);
- if (length >= 0)
- length = simple_read_from_buffer(buf, count, ppos, (char *)page, length);
- free_page(page);
-out:
+ &p);
put_task_struct(task);
-out_no_task:
+ if (length > 0)
+ length = simple_read_from_buffer(buf, count, ppos, p, length);
+ kfree(p);
return length;
}
#include <linux/proc_fs.h>
+#ifdef CONFIG_PROC_SYSCTL
extern int proc_sys_init(void);
+#else
+static inline void proc_sys_init(void) { }
+#endif
struct vmalloc_info {
unsigned long used;
proc_device_tree_init();
#endif
proc_bus = proc_mkdir("bus", NULL);
-#ifdef CONFIG_SYSCTL
proc_sys_init();
-#endif
}
static int proc_root_getattr(struct vfsmount *mnt, struct dentry *dentry, struct kstat *stat
{
key->on_disk_key.k_objectid--;
set_cpu_key_k_type(key, TYPE_ANY);
- set_cpu_key_k_offset(key, (loff_t) (-1));
+ set_cpu_key_k_offset(key, (loff_t)(~0ULL >> 1));
}
static int sd_is_left_mergeable(struct reiserfs_key *key, unsigned long bsize)
static struct reiserfs_xattr_handler *find_xattr_handler_prefix(const char
*prefix);
-static struct dentry *create_xa_root(struct super_block *sb)
+/* Returns the dentry referring to the root of the extended attribute
+ * directory tree. If it has already been retrieved, it is used. If it
+ * hasn't been created and the flags indicate creation is allowed, we
+ * attempt to create it. On error, we return a pointer-encoded error.
+ */
+static struct dentry *get_xa_root(struct super_block *sb, int flags)
{
struct dentry *privroot = dget(REISERFS_SB(sb)->priv_root);
struct dentry *xaroot;
/* This needs to be created at mount-time */
if (!privroot)
- return ERR_PTR(-EOPNOTSUPP);
+ return ERR_PTR(-ENODATA);
- xaroot = lookup_one_len(XAROOT_NAME, privroot, strlen(XAROOT_NAME));
- if (IS_ERR(xaroot)) {
+ mutex_lock(&privroot->d_inode->i_mutex);
+ if (REISERFS_SB(sb)->xattr_root) {
+ xaroot = dget(REISERFS_SB(sb)->xattr_root);
goto out;
- } else if (!xaroot->d_inode) {
- int err;
- mutex_lock(&privroot->d_inode->i_mutex);
- err =
- privroot->d_inode->i_op->mkdir(privroot->d_inode, xaroot,
- 0700);
- mutex_unlock(&privroot->d_inode->i_mutex);
-
- if (err) {
- dput(xaroot);
- dput(privroot);
- return ERR_PTR(err);
- }
- REISERFS_SB(sb)->xattr_root = dget(xaroot);
}
- out:
- dput(privroot);
- return xaroot;
-}
-
-/* This will return a dentry, or error, refering to the xa root directory.
- * If the xa root doesn't exist yet, the dentry will be returned without
- * an associated inode. This dentry can be used with ->mkdir to create
- * the xa directory. */
-static struct dentry *__get_xa_root(struct super_block *s)
-{
- struct dentry *privroot = dget(REISERFS_SB(s)->priv_root);
- struct dentry *xaroot = NULL;
-
- if (IS_ERR(privroot) || !privroot)
- return privroot;
-
xaroot = lookup_one_len(XAROOT_NAME, privroot, strlen(XAROOT_NAME));
if (IS_ERR(xaroot)) {
goto out;
} else if (!xaroot->d_inode) {
- dput(xaroot);
- xaroot = NULL;
- goto out;
+ int err = -ENODATA;
+ if (flags == 0 || flags & XATTR_CREATE)
+ err = privroot->d_inode->i_op->mkdir(privroot->d_inode,
+ xaroot, 0700);
+ if (err) {
+ dput(xaroot);
+ xaroot = ERR_PTR(err);
+ goto out;
+ }
}
-
- REISERFS_SB(s)->xattr_root = dget(xaroot);
+ REISERFS_SB(sb)->xattr_root = dget(xaroot);
out:
+ mutex_unlock(&privroot->d_inode->i_mutex);
dput(privroot);
return xaroot;
}
-/* Returns the dentry (or NULL) referring to the root of the extended
- * attribute directory tree. If it has already been retrieved, it is used.
- * Otherwise, we attempt to retrieve it from disk. It may also return
- * a pointer-encoded error.
- */
-static inline struct dentry *get_xa_root(struct super_block *s)
-{
- struct dentry *dentry = dget(REISERFS_SB(s)->xattr_root);
-
- if (!dentry)
- dentry = __get_xa_root(s);
-
- return dentry;
-}
-
/* Opens the directory corresponding to the inode's extended attribute store.
* If flags allow, the tree to the directory may be created. If creation is
* prohibited, -ENODATA is returned. */
struct dentry *xaroot, *xadir;
char namebuf[17];
- xaroot = get_xa_root(inode->i_sb);
- if (IS_ERR(xaroot)) {
+ xaroot = get_xa_root(inode->i_sb, flags);
+ if (IS_ERR(xaroot))
return xaroot;
- } else if (!xaroot) {
- if (flags == 0 || flags & XATTR_CREATE) {
- xaroot = create_xa_root(inode->i_sb);
- if (IS_ERR(xaroot))
- return xaroot;
- }
- if (!xaroot)
- return ERR_PTR(-ENODATA);
- }
/* ok, we have xaroot open */
-
snprintf(namebuf, sizeof(namebuf), "%X.%X",
le32_to_cpu(INODE_PKEY(inode)->k_objectid),
inode->i_generation);
/* Leftovers besides . and .. -- that's not good. */
if (dir->d_inode->i_nlink <= 2) {
- root = get_xa_root(inode->i_sb);
+ root = get_xa_root(inode->i_sb, XATTR_REPLACE);
reiserfs_write_lock_xattrs(inode->i_sb);
err = vfs_rmdir(root->d_inode, dir);
reiserfs_write_unlock_xattrs(inode->i_sb);
req->rq_errno = 0;
req->rq_fragment = 0;
kfree(req->rq_trans2buffer);
+ req->rq_trans2buffer = NULL;
return 0;
}
if (this_len + offset > PAGE_CACHE_SIZE)
this_len = PAGE_CACHE_SIZE - offset;
- /*
- * Reuse buf page, if SPLICE_F_MOVE is set and we are doing a full
- * page.
- */
- if ((sd->flags & SPLICE_F_MOVE) && this_len == PAGE_CACHE_SIZE) {
- /*
- * If steal succeeds, buf->page is now pruned from the
- * pagecache and we can reuse it. The page will also be
- * locked on successful return.
- */
- if (buf->ops->steal(pipe, buf))
- goto find_page;
-
- page = buf->page;
- if (add_to_page_cache(page, mapping, index, GFP_KERNEL)) {
- unlock_page(page);
- goto find_page;
- }
-
- page_cache_get(page);
-
- if (!(buf->flags & PIPE_BUF_FLAG_LRU))
- lru_cache_add(page);
- } else {
find_page:
- page = find_lock_page(mapping, index);
- if (!page) {
- ret = -ENOMEM;
- page = page_cache_alloc_cold(mapping);
- if (unlikely(!page))
- goto out_ret;
-
- /*
- * This will also lock the page
- */
- ret = add_to_page_cache_lru(page, mapping, index,
- GFP_KERNEL);
- if (unlikely(ret))
- goto out;
- }
+ page = find_lock_page(mapping, index);
+ if (!page) {
+ ret = -ENOMEM;
+ page = page_cache_alloc_cold(mapping);
+ if (unlikely(!page))
+ goto out_ret;
/*
- * We get here with the page locked. If the page is also
- * uptodate, we don't need to do more. If it isn't, we
- * may need to bring it in if we are not going to overwrite
- * the full page.
+ * This will also lock the page
*/
- if (!PageUptodate(page)) {
- if (this_len < PAGE_CACHE_SIZE) {
- ret = mapping->a_ops->readpage(file, page);
- if (unlikely(ret))
- goto out;
-
- lock_page(page);
-
- if (!PageUptodate(page)) {
- /*
- * Page got invalidated, repeat.
- */
- if (!page->mapping) {
- unlock_page(page);
- page_cache_release(page);
- goto find_page;
- }
- ret = -EIO;
- goto out;
- }
- } else
- SetPageUptodate(page);
- }
+ ret = add_to_page_cache_lru(page, mapping, index,
+ GFP_KERNEL);
+ if (unlikely(ret))
+ goto out;
}
ret = mapping->a_ops->prepare_write(file, page, offset, offset+this_len);
}
ret = mapping->a_ops->commit_write(file, page, offset, offset+this_len);
- if (!ret) {
+ if (ret) {
+ if (ret == AOP_TRUNCATED_PAGE) {
+ page_cache_release(page);
+ goto find_page;
+ }
+ if (ret < 0)
+ goto out;
/*
- * Return the number of bytes written and mark page as
- * accessed, we are now done!
+	 * A partial write has happened, so 'ret' is already initialized to
+	 * the number of bytes written; there is nothing more to do here.
*/
+ } else
ret = this_len;
- mark_page_accessed(page);
- balance_dirty_pages_ratelimited(mapping);
- } else if (ret == AOP_TRUNCATED_PAGE) {
- page_cache_release(page);
- goto find_page;
- }
+ /*
+ * Return the number of bytes written and mark page as
+ * accessed, we are now done!
+ */
+ mark_page_accessed(page);
+ balance_dirty_pages_ratelimited(mapping);
out:
page_cache_release(page);
unlock_page(page);
* key here is the 'actor' worker passed in that actually moves the data
* to the wanted destination. See pipe_to_file/pipe_to_sendpage above.
*/
-static ssize_t __splice_from_pipe(struct pipe_inode_info *pipe,
- struct file *out, loff_t *ppos, size_t len,
- unsigned int flags, splice_actor *actor)
+ssize_t __splice_from_pipe(struct pipe_inode_info *pipe,
+ struct file *out, loff_t *ppos, size_t len,
+ unsigned int flags, splice_actor *actor)
{
int ret, do_wakeup, err;
struct splice_desc sd;
return ret;
}
+EXPORT_SYMBOL(__splice_from_pipe);
ssize_t splice_from_pipe(struct pipe_inode_info *pipe, struct file *out,
loff_t *ppos, size_t len, unsigned int flags,
ssize_t retval = 0;
down(&buffer->sem);
- if (buffer->orphaned) {
- retval = -ENODEV;
- goto out;
- }
if (buffer->needs_read_fill) {
- if ((retval = fill_read_buffer(file->f_path.dentry,buffer)))
+ if (buffer->orphaned)
+ retval = -ENODEV;
+ else
+ retval = fill_read_buffer(file->f_path.dentry,buffer);
+ if (retval)
goto out;
}
pr_debug("%s: count = %zd, ppos = %lld, buf = %s\n",
}
EXPORT_SYMBOL_GPL(sysfs_remove_file_from_group);
+struct sysfs_schedule_callback_struct {
+ struct kobject *kobj;
+ void (*func)(void *);
+ void *data;
+ struct work_struct work;
+};
+
+static void sysfs_schedule_callback_work(struct work_struct *work)
+{
+ struct sysfs_schedule_callback_struct *ss = container_of(work,
+ struct sysfs_schedule_callback_struct, work);
+
+ (ss->func)(ss->data);
+ kobject_put(ss->kobj);
+ kfree(ss);
+}
+
+/**
+ * sysfs_schedule_callback - helper to schedule a callback for a kobject
+ * @kobj: object we're acting for.
+ * @func: callback function to invoke later.
+ * @data: argument to pass to @func.
+ *
+ * sysfs attribute methods must not unregister themselves or their parent
+ * kobject (which would amount to the same thing). Attempts to do so will
+ * deadlock, since unregistration is mutually exclusive with driver
+ * callbacks.
+ *
+ * Instead methods can call this routine, which will attempt to allocate
+ * and schedule a workqueue request to call back @func with @data as its
+ * argument in the workqueue's process context. @kobj will be pinned
+ * until @func returns.
+ *
+ * Returns 0 if the request was submitted, -ENOMEM if storage could not
+ * be allocated.
+ */
+int sysfs_schedule_callback(struct kobject *kobj, void (*func)(void *),
+ void *data)
+{
+ struct sysfs_schedule_callback_struct *ss;
+
+ ss = kmalloc(sizeof(*ss), GFP_KERNEL);
+ if (!ss)
+ return -ENOMEM;
+ kobject_get(kobj);
+ ss->kobj = kobj;
+ ss->func = func;
+ ss->data = data;
+ INIT_WORK(&ss->work, sysfs_schedule_callback_work);
+ schedule_work(&ss->work);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(sysfs_schedule_callback);
+
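A minimal usage sketch for the new helper (not part of this patch; the "delete" attribute, the callback name, and the header assumptions are purely illustrative). A sysfs ->store() method must not unregister its own device directly, so it hands the work to sysfs_schedule_callback() instead:

#include <linux/device.h>
#include <linux/sysfs.h>	/* assumes the new helper is declared here */

static void example_delete_callback(void *data)
{
	struct device *dev = data;

	/* runs later in workqueue process context, outside any sysfs method */
	device_unregister(dev);
}

static ssize_t example_delete_store(struct device *dev,
				    struct device_attribute *attr,
				    const char *buf, size_t count)
{
	int err;

	/* pins dev->kobj and defers the unregister instead of deadlocking */
	err = sysfs_schedule_callback(&dev->kobj, example_delete_callback, dev);
	return err ? err : count;
}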
EXPORT_SYMBOL_GPL(sysfs_create_file);
EXPORT_SYMBOL_GPL(sysfs_remove_file);
static inline void orphan_all_buffers(struct inode *node)
{
- struct sysfs_buffer_collection *set = node->i_private;
+ struct sysfs_buffer_collection *set;
struct sysfs_buffer *buf;
mutex_lock_nested(&node->i_mutex, I_MUTEX_CHILD);
- if (node->i_private) {
- list_for_each_entry(buf, &set->associates, associates)
+ set = node->i_private;
+ if (set) {
+ list_for_each_entry(buf, &set->associates, associates) {
+ down(&buf->sem);
buf->orphaned = 1;
+ up(&buf->sem);
+ }
}
mutex_unlock(&node->i_mutex);
}
 * We can come here from ufs_writepage or ufs_prepare_write;
 * locked_page is an argument of those functions, so it is already locked.
*/
-static void ufs_change_blocknr(struct inode *inode, unsigned int beg,
- unsigned int count, unsigned int oldb,
- unsigned int newb, struct page *locked_page)
+static void ufs_change_blocknr(struct inode *inode, sector_t beg,
+ unsigned int count, sector_t oldb,
+ sector_t newb, struct page *locked_page)
{
- const unsigned mask = (1 << (PAGE_CACHE_SHIFT - inode->i_blkbits)) - 1;
+ const unsigned blks_per_page =
+ 1 << (PAGE_CACHE_SHIFT - inode->i_blkbits);
+ const unsigned mask = blks_per_page - 1;
struct address_space * const mapping = inode->i_mapping;
- pgoff_t index, cur_index;
- unsigned end, pos, j;
+ pgoff_t index, cur_index, last_index;
+ unsigned pos, j, lblock;
+ sector_t end, i;
struct page *page;
struct buffer_head *head, *bh;
- UFSD("ENTER, ino %lu, count %u, oldb %u, newb %u\n",
- inode->i_ino, count, oldb, newb);
+ UFSD("ENTER, ino %lu, count %u, oldb %llu, newb %llu\n",
+ inode->i_ino, count,
+ (unsigned long long)oldb, (unsigned long long)newb);
BUG_ON(!locked_page);
BUG_ON(!PageLocked(locked_page));
cur_index = locked_page->index;
-
- for (end = count + beg; beg < end; beg = (beg | mask) + 1) {
- index = beg >> (PAGE_CACHE_SHIFT - inode->i_blkbits);
+ end = count + beg;
+ last_index = end >> (PAGE_CACHE_SHIFT - inode->i_blkbits);
+ for (i = beg; i < end; i = (i | mask) + 1) {
+ index = i >> (PAGE_CACHE_SHIFT - inode->i_blkbits);
if (likely(cur_index != index)) {
page = ufs_get_locked_page(mapping, index);
- if (!page || IS_ERR(page)) /* it was truncated or EIO */
+ if (!page)/* it was truncated */
+ continue;
+ if (IS_ERR(page)) {/* or EIO */
+ ufs_error(inode->i_sb, __FUNCTION__,
+ "read of page %llu failed\n",
+ (unsigned long long)index);
continue;
+ }
} else
page = locked_page;
head = page_buffers(page);
bh = head;
- pos = beg & mask;
+ pos = i & mask;
for (j = 0; j < pos; ++j)
bh = bh->b_this_page;
- j = 0;
+
+
+ if (unlikely(index == last_index))
+ lblock = end & mask;
+ else
+ lblock = blks_per_page;
+
do {
- if (buffer_mapped(bh)) {
- pos = bh->b_blocknr - oldb;
- if (pos < count) {
- UFSD(" change from %llu to %llu\n",
- (unsigned long long)pos + oldb,
- (unsigned long long)pos + newb);
- bh->b_blocknr = newb + pos;
- unmap_underlying_metadata(bh->b_bdev,
- bh->b_blocknr);
- mark_buffer_dirty(bh);
- ++j;
+ if (j >= lblock)
+ break;
+ pos = (i - beg) + j;
+
+ if (!buffer_mapped(bh))
+ map_bh(bh, inode->i_sb, oldb + pos);
+ if (!buffer_uptodate(bh)) {
+ ll_rw_block(READ, 1, &bh);
+ wait_on_buffer(bh);
+ if (!buffer_uptodate(bh)) {
+ ufs_error(inode->i_sb, __FUNCTION__,
+ "read of block failed\n");
+ break;
}
}
+ UFSD(" change from %llu to %llu, pos %u\n",
+ (unsigned long long)pos + oldb,
+ (unsigned long long)pos + newb, pos);
+
+ bh->b_blocknr = newb + pos;
+ unmap_underlying_metadata(bh->b_bdev,
+ bh->b_blocknr);
+ mark_buffer_dirty(bh);
+ ++j;
bh = bh->b_this_page;
} while (bh != head);
- if (j)
- set_page_dirty(page);
-
if (likely(cur_index != index))
ufs_put_locked_page(page);
}
if (result) {
ufs_clear_frags(inode, result + oldcount, newcount - oldcount,
locked_page != NULL);
- ufs_change_blocknr(inode, fragment - oldcount, oldcount, tmp,
- result, locked_page);
+ ufs_change_blocknr(inode, fragment - oldcount, oldcount,
+ uspi->s_sbbase + tmp,
+ uspi->s_sbbase + result, locked_page);
ufs_cpu_to_data_ptr(sb, p, result);
*err = 0;
UFS_I(inode)->i_lastfrag = max_t(u32, UFS_I(inode)->i_lastfrag, fragment + count);
lock_buffer(bh);
ufs2_inode = (struct ufs2_inode *)bh->b_data;
ufs2_inode += ufs_inotofsbo(inode->i_ino);
- ufs2_inode->ui_birthtime.tv_sec =
- cpu_to_fs32(sb, CURRENT_TIME_SEC.tv_sec);
- ufs2_inode->ui_birthtime.tv_usec = 0;
+ ufs2_inode->ui_birthtime = cpu_to_fs64(sb, CURRENT_TIME.tv_sec);
+ ufs2_inode->ui_birthnsec = cpu_to_fs32(sb, CURRENT_TIME.tv_nsec);
mark_buffer_dirty(bh);
unlock_buffer(bh);
if (sb->s_flags & MS_SYNCHRONOUS)
brelse (result);
goto repeat;
} else {
- *phys = tmp + blockoff;
+ *phys = uspi->s_sbbase + tmp + blockoff;
return NULL;
}
}
}
if (!phys) {
- result = sb_getblk(sb, tmp + blockoff);
+ result = sb_getblk(sb, uspi->s_sbbase + tmp + blockoff);
} else {
- *phys = tmp + blockoff;
+ *phys = uspi->s_sbbase + tmp + blockoff;
result = NULL;
*err = 0;
*new = 1;
brelse (result);
goto repeat;
} else {
- *phys = tmp + blockoff;
+ *phys = uspi->s_sbbase + tmp + blockoff;
goto out;
}
}
if (!phys) {
- result = sb_getblk(sb, tmp + blockoff);
+ result = sb_getblk(sb, uspi->s_sbbase + tmp + blockoff);
} else {
- *phys = tmp + blockoff;
+ *phys = uspi->s_sbbase + tmp + blockoff;
*new = 1;
}
ufs_get_inode_dev(inode->i_sb, UFS_I(inode)));
}
-static void ufs1_read_inode(struct inode *inode, struct ufs_inode *ufs_inode)
+static int ufs1_read_inode(struct inode *inode, struct ufs_inode *ufs_inode)
{
struct ufs_inode_info *ufsi = UFS_I(inode);
struct super_block *sb = inode->i_sb;
*/
inode->i_mode = mode = fs16_to_cpu(sb, ufs_inode->ui_mode);
inode->i_nlink = fs16_to_cpu(sb, ufs_inode->ui_nlink);
- if (inode->i_nlink == 0)
+ if (inode->i_nlink == 0) {
ufs_error (sb, "ufs_read_inode", "inode %lu has zero nlink\n", inode->i_ino);
+ return -1;
+ }
/*
* Linux now has 32-bit uid and gid, so we can support EFT.
for (i = 0; i < (UFS_NDADDR + UFS_NINDIR) * 4; i++)
ufsi->i_u1.i_symlink[i] = ufs_inode->ui_u2.ui_symlink[i];
}
+ return 0;
}
-static void ufs2_read_inode(struct inode *inode, struct ufs2_inode *ufs2_inode)
+static int ufs2_read_inode(struct inode *inode, struct ufs2_inode *ufs2_inode)
{
struct ufs_inode_info *ufsi = UFS_I(inode);
struct super_block *sb = inode->i_sb;
*/
inode->i_mode = mode = fs16_to_cpu(sb, ufs2_inode->ui_mode);
inode->i_nlink = fs16_to_cpu(sb, ufs2_inode->ui_nlink);
- if (inode->i_nlink == 0)
+ if (inode->i_nlink == 0) {
ufs_error (sb, "ufs_read_inode", "inode %lu has zero nlink\n", inode->i_ino);
+ return -1;
+ }
/*
* Linux now has 32-bit uid and gid, so we can support EFT.
inode->i_gid = fs32_to_cpu(sb, ufs2_inode->ui_gid);
inode->i_size = fs64_to_cpu(sb, ufs2_inode->ui_size);
- inode->i_atime.tv_sec = fs32_to_cpu(sb, ufs2_inode->ui_atime.tv_sec);
- inode->i_ctime.tv_sec = fs32_to_cpu(sb, ufs2_inode->ui_ctime.tv_sec);
- inode->i_mtime.tv_sec = fs32_to_cpu(sb, ufs2_inode->ui_mtime.tv_sec);
- inode->i_mtime.tv_nsec = 0;
- inode->i_atime.tv_nsec = 0;
- inode->i_ctime.tv_nsec = 0;
+ inode->i_atime.tv_sec = fs64_to_cpu(sb, ufs2_inode->ui_atime);
+ inode->i_ctime.tv_sec = fs64_to_cpu(sb, ufs2_inode->ui_ctime);
+ inode->i_mtime.tv_sec = fs64_to_cpu(sb, ufs2_inode->ui_mtime);
+ inode->i_atime.tv_nsec = fs32_to_cpu(sb, ufs2_inode->ui_atimensec);
+ inode->i_ctime.tv_nsec = fs32_to_cpu(sb, ufs2_inode->ui_ctimensec);
+ inode->i_mtime.tv_nsec = fs32_to_cpu(sb, ufs2_inode->ui_mtimensec);
inode->i_blocks = fs64_to_cpu(sb, ufs2_inode->ui_blocks);
inode->i_generation = fs32_to_cpu(sb, ufs2_inode->ui_gen);
ufsi->i_flags = fs32_to_cpu(sb, ufs2_inode->ui_flags);
for (i = 0; i < (UFS_NDADDR + UFS_NINDIR) * 4; i++)
ufsi->i_u1.i_symlink[i] = ufs2_inode->ui_u2.ui_symlink[i];
}
+ return 0;
}
void ufs_read_inode(struct inode * inode)
struct super_block * sb;
struct ufs_sb_private_info * uspi;
struct buffer_head * bh;
+ int err;
UFSD("ENTER, ino %lu\n", inode->i_ino);
if ((UFS_SB(sb)->s_flags & UFS_TYPE_MASK) == UFS_TYPE_UFS2) {
struct ufs2_inode *ufs2_inode = (struct ufs2_inode *)bh->b_data;
- ufs2_read_inode(inode,
- ufs2_inode + ufs_inotofsbo(inode->i_ino));
+ err = ufs2_read_inode(inode,
+ ufs2_inode + ufs_inotofsbo(inode->i_ino));
} else {
struct ufs_inode *ufs_inode = (struct ufs_inode *)bh->b_data;
- ufs1_read_inode(inode, ufs_inode + ufs_inotofsbo(inode->i_ino));
+ err = ufs1_read_inode(inode,
+ ufs_inode + ufs_inotofsbo(inode->i_ino));
}
+ if (err)
+ goto bad_inode;
inode->i_version++;
ufsi->i_lastfrag =
(inode->i_size + uspi->s_fsize - 1) >> uspi->s_fshift;
ufs_inode->ui_gid = cpu_to_fs32(sb, inode->i_gid);
ufs_inode->ui_size = cpu_to_fs64(sb, inode->i_size);
- ufs_inode->ui_atime.tv_sec = cpu_to_fs32(sb, inode->i_atime.tv_sec);
- ufs_inode->ui_atime.tv_usec = 0;
- ufs_inode->ui_ctime.tv_sec = cpu_to_fs32(sb, inode->i_ctime.tv_sec);
- ufs_inode->ui_ctime.tv_usec = 0;
- ufs_inode->ui_mtime.tv_sec = cpu_to_fs32(sb, inode->i_mtime.tv_sec);
- ufs_inode->ui_mtime.tv_usec = 0;
+ ufs_inode->ui_atime = cpu_to_fs64(sb, inode->i_atime.tv_sec);
+ ufs_inode->ui_atimensec = cpu_to_fs32(sb, inode->i_atime.tv_nsec);
+ ufs_inode->ui_ctime = cpu_to_fs64(sb, inode->i_ctime.tv_sec);
+ ufs_inode->ui_ctimensec = cpu_to_fs32(sb, inode->i_ctime.tv_nsec);
+ ufs_inode->ui_mtime = cpu_to_fs64(sb, inode->i_mtime.tv_sec);
+ ufs_inode->ui_mtimensec = cpu_to_fs32(sb, inode->i_mtime.tv_nsec);
ufs_inode->ui_blocks = cpu_to_fs64(sb, inode->i_blocks);
ufs_inode->ui_flags = cpu_to_fs32(sb, ufsi->i_flags);
loff_t old_i_size;
truncate_inode_pages(&inode->i_data, 0);
+ if (is_bad_inode(inode))
+ goto no_delete;
/*UFS_I(inode)->i_dtime = CURRENT_TIME;*/
lock_kernel();
mark_inode_dirty(inode);
ufs_warning(inode->i_sb, __FUNCTION__, "ufs_truncate failed\n");
ufs_free_inode (inode);
unlock_kernel();
+ return;
+no_delete:
+ clear_inode(inode); /* We must guarantee clearing of inode... */
}
unsigned i, tmp;
int retry;
- UFSD("ENTER\n");
+ UFSD("ENTER: ino %lu\n", inode->i_ino);
sb = inode->i_sb;
uspi = UFS_SB(sb)->s_uspi;
block2 = ufs_fragstoblks (frag3);
}
- UFSD("frag1 %llu, frag2 %llu, block1 %llu, block2 %llu, frag3 %llu,"
- " frag4 %llu\n",
+ UFSD("ino %lu, frag1 %llu, frag2 %llu, block1 %llu, block2 %llu,"
+ " frag3 %llu, frag4 %llu\n", inode->i_ino,
(unsigned long long)frag1, (unsigned long long)frag2,
(unsigned long long)block1, (unsigned long long)block2,
(unsigned long long)frag3, (unsigned long long)frag4);
mark_inode_dirty(inode);
next3:
- UFSD("EXIT\n");
+ UFSD("EXIT: ino %lu\n", inode->i_ino);
return retry;
}
}
ubh_brelse (ind_ubh);
- UFSD("EXIT\n");
+ UFSD("EXIT: ino %lu\n", inode->i_ino);
return retry;
}
void *dind;
int retry = 0;
- UFSD("ENTER\n");
+ UFSD("ENTER: ino %lu\n", inode->i_ino);
sb = inode->i_sb;
uspi = UFS_SB(sb)->s_uspi;
}
ubh_brelse (dind_bh);
- UFSD("EXIT\n");
+ UFSD("EXIT: ino %lu\n", inode->i_ino);
return retry;
}
void *tind, *p;
int retry;
- UFSD("ENTER\n");
+ UFSD("ENTER: ino %lu\n", inode->i_ino);
retry = 0;
}
for (i = tindirect_block ; i < uspi->s_apb ; i++) {
- tind = ubh_get_addr32 (tind_bh, i);
+ tind = ubh_get_data_ptr(uspi, tind_bh, i);
retry |= ufs_trunc_dindirect(inode, UFS_NDADDR +
uspi->s_apb + ((i + 1) << uspi->s_2apbshift), tind);
ubh_mark_buffer_dirty(tind_bh);
}
ubh_brelse (tind_bh);
- UFSD("EXIT\n");
+ UFSD("EXIT: ino %lu\n", inode->i_ino);
return retry;
}
static int ufs_alloc_lastblock(struct inode *inode)
{
int err = 0;
+ struct super_block *sb = inode->i_sb;
struct address_space *mapping = inode->i_mapping;
- struct ufs_sb_private_info *uspi = UFS_SB(inode->i_sb)->s_uspi;
+ struct ufs_sb_private_info *uspi = UFS_SB(sb)->s_uspi;
unsigned i, end;
sector_t lastfrag;
struct page *lastpage;
struct buffer_head *bh;
+ u64 phys64;
lastfrag = (i_size_read(inode) + uspi->s_fsize - 1) >> uspi->s_fshift;
set_page_dirty(lastpage);
}
+ if (lastfrag >= UFS_IND_FRAGMENT) {
+ end = uspi->s_fpb - ufs_fragnum(lastfrag) - 1;
+ phys64 = bh->b_blocknr + 1;
+ for (i = 0; i < end; ++i) {
+ bh = sb_getblk(sb, i + phys64);
+ lock_buffer(bh);
+ memset(bh->b_data, 0, sb->s_blocksize);
+ set_buffer_uptodate(bh);
+ mark_buffer_dirty(bh);
+ unlock_buffer(bh);
+ sync_dirty_buffer(bh);
+ brelse(bh);
+ }
+ }
out_unlock:
ufs_put_locked_page(lastpage);
out:
if (!xfs_buf_zone)
goto out_free_trace_buf;
- xfslogd_workqueue = create_freezeable_workqueue("xfslogd");
+ xfslogd_workqueue = create_workqueue("xfslogd");
if (!xfslogd_workqueue)
goto out_free_buf_zone;
- xfsdatad_workqueue = create_freezeable_workqueue("xfsdatad");
+ xfsdatad_workqueue = create_workqueue("xfsdatad");
if (!xfsdatad_workqueue)
goto out_destroy_xfslogd_workqueue;
/*
* exutils - interpreter/scanner utilities
*/
-void acpi_ex_enter_interpreter(void);
+acpi_status acpi_ex_enter_interpreter(void);
void acpi_ex_exit_interpreter(void);
-void acpi_ex_reacquire_interpreter(void);
-
-void acpi_ex_relinquish_interpreter(void);
-
void acpi_ex_truncate_for32bit_table(union acpi_operand_object *obj_desc);
u8 acpi_ex_acquire_global_lock(u32 rule);
/* 64-bit integers */
-typedef u64 acpi_integer;
+typedef unsigned long long acpi_integer;
#define ACPI_INTEGER_MAX ACPI_UINT64_MAX
#define ACPI_INTEGER_BIT_SIZE 64
#define ACPI_MAX_DECIMAL_DIGITS 20 /* 2^64 = 18,446,744,073,709,551,616 */
# define __kernel_extbl(val, shift) __builtin_alpha_extbl(val, shift)
# define __kernel_extwl(val, shift) __builtin_alpha_extwl(val, shift)
# define __kernel_cmpbge(a, b) __builtin_alpha_cmpbge(a, b)
-# define __kernel_cttz(x) __builtin_ctzl(x)
-# define __kernel_ctlz(x) __builtin_clzl(x)
-# define __kernel_ctpop(x) __builtin_popcountl(x)
#else
# define __kernel_insbl(val, shift) \
({ unsigned long __kir; \
({ unsigned long __kir; \
__asm__("cmpbge %r2,%1,%0" : "=r"(__kir) : "rI"(b), "rJ"(a)); \
__kir; })
+#endif
+
+#ifdef __alpha_cix__
+# if __GNUC__ == 3 && __GNUC_MINOR__ >= 4 || __GNUC__ > 3
+# define __kernel_cttz(x) __builtin_ctzl(x)
+# define __kernel_ctlz(x) __builtin_clzl(x)
+# define __kernel_ctpop(x) __builtin_popcountl(x)
+# else
+# define __kernel_cttz(x) \
+ ({ unsigned long __kir; \
+ __asm__("cttz %1,%0" : "=r"(__kir) : "r"(x)); \
+ __kir; })
+# define __kernel_ctlz(x) \
+ ({ unsigned long __kir; \
+ __asm__("ctlz %1,%0" : "=r"(__kir) : "r"(x)); \
+ __kir; })
+# define __kernel_ctpop(x) \
+ ({ unsigned long __kir; \
+ __asm__("ctpop %1,%0" : "=r"(__kir) : "r"(x)); \
+ __kir; })
+# endif
+#else
# define __kernel_cttz(x) \
({ unsigned long __kir; \
- __asm__("cttz %1,%0" : "=r"(__kir) : "r"(x)); \
+ __asm__(".arch ev67; cttz %1,%0" : "=r"(__kir) : "r"(x)); \
__kir; })
# define __kernel_ctlz(x) \
({ unsigned long __kir; \
- __asm__("ctlz %1,%0" : "=r"(__kir) : "r"(x)); \
+ __asm__(".arch ev67; ctlz %1,%0" : "=r"(__kir) : "r"(x)); \
__kir; })
# define __kernel_ctpop(x) \
({ unsigned long __kir; \
- __asm__("ctpop %1,%0" : "=r"(__kir) : "r"(x)); \
+ __asm__(".arch ev67; ctpop %1,%0" : "=r"(__kir) : "r"(x)); \
__kir; })
#endif
#else
#define __kernel_ldbu(mem) \
({ unsigned char __kir; \
- __asm__("ldbu %0,%1" : "=r"(__kir) : "m"(mem)); \
+ __asm__(".arch ev56; \
+ ldbu %0,%1" : "=r"(__kir) : "m"(mem)); \
__kir; })
#define __kernel_ldwu(mem) \
({ unsigned short __kir; \
- __asm__("ldwu %0,%1" : "=r"(__kir) : "m"(mem)); \
+ __asm__(".arch ev56; \
+ ldwu %0,%1" : "=r"(__kir) : "m"(mem)); \
__kir; })
-#define __kernel_stb(val,mem) \
- __asm__("stb %1,%0" : "=m"(mem) : "r"(val))
-#define __kernel_stw(val,mem) \
- __asm__("stw %1,%0" : "=m"(mem) : "r"(val))
+#define __kernel_stb(val,mem) \
+ __asm__(".arch ev56; \
+ stb %1,%0" : "=m"(mem) : "r"(val))
+#define __kernel_stw(val,mem) \
+ __asm__(".arch ev56; \
+ stw %1,%0" : "=m"(mem) : "r"(val))
#endif
#ifdef __KERNEL__
*
*/
+#define MCPCIA_MAX_HOSES 4
+
#define MCPCIA_MID(m) ((unsigned long)(m) << 33)
/* Dodge has PCI0 and PCI1 at MID 4 and 5 respectively.
unsigned long bus = phys + __direct_map_base;
return phys <= __direct_map_size ? bus : 0;
}
+#define isa_virt_to_bus virt_to_bus
static inline void *bus_to_virt(unsigned long address)
{
}
extern int gpio_direction_input(unsigned gpio);
-extern int gpio_direction_output(unsigned gpio);
+extern int gpio_direction_output(unsigned gpio, int value);
static inline int gpio_get_value(unsigned gpio)
{
#define memcpy_fromio(a,c,l) _memcpy_fromio((a),(c),(l))
#define memcpy_toio(c,a,l) _memcpy_toio((c),(a),(l))
-static inline int
-check_signature(const unsigned char __iomem *bus_addr, const unsigned char *signature,
- int length)
-{
- int retval = 0;
- do {
- if (readb(bus_addr) != *signature)
- goto out;
- bus_addr++;
- signature++;
- length--;
- } while (length);
- retval = 1;
-out:
- return retval;
-}
-
#endif
#ifndef CONFIG_PCI
#define IXP4XX_INTC_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x3000)
#define IXP4XX_GPIO_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x4000)
#define IXP4XX_TIMER_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x5000)
-#define IXP4XX_NPEA_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_PHYS + 0x6000)
-#define IXP4XX_NPEB_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_PHYS + 0x7000)
-#define IXP4XX_NPEC_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_PHYS + 0x8000)
+#define IXP4XX_NPEA_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x6000)
+#define IXP4XX_NPEB_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x7000)
+#define IXP4XX_NPEC_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x8000)
#define IXP4XX_EthB_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x9000)
#define IXP4XX_EthC_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0xA000)
#define IXP4XX_USB_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0xB000)
.macro disable_fiq
.endm
+ .macro get_irqnr_preamble, base, tmp
+ .endm
+
+ .macro arch_ret_to_user, tmp1, tmp2
+ .endm
+
.macro get_irqnr_and_base, irqnr, irqstat, base, tmp
mov \irqnr, #0
mov \base, #io_p2v(0x80000000) @ APB registers
.macro disable_fiq
.endm
+ .macro get_irqnr_preamble, base, tmp
+ .endm
+
+ .macro arch_ret_to_user, tmp1, tmp2
+ .endm
+
.macro get_irqnr_and_base, irqnr, irqstat, base, tmp
mov \irqnr, #0 @ VIC1 irq base
mov \base, #io_p2v(0x80000000) @ APB registers
#ifndef __ASM_ARCH_CLOCK_H
#define __ASM_ARCH_CLOCK_H
+static inline u32 ns9xxx_systemclock(void) __attribute__((const));
static inline u32 ns9xxx_systemclock(void)
{
/*
return 353894400;
}
-static inline const u32 ns9xxx_cpuclock(void)
+static inline u32 ns9xxx_cpuclock(void) __attribute__((const));
+static inline u32 ns9xxx_cpuclock(void)
{
return ns9xxx_systemclock() / 2;
}
-static inline const u32 ns9xxx_ahbclock(void)
+static inline u32 ns9xxx_ahbclock(void) __attribute__((const));
+static inline u32 ns9xxx_ahbclock(void)
{
return ns9xxx_systemclock() / 4;
}
-static inline const u32 ns9xxx_bbusclock(void)
+static inline u32 ns9xxx_bbusclock(void) __attribute__((const));
+static inline u32 ns9xxx_bbusclock(void)
{
return ns9xxx_systemclock() / 8;
}
return __gpio_set_direction(gpio, 1);
}
-static inline int gpio_direction_output(unsigned gpio)
+static inline int gpio_direction_output(unsigned gpio, int value)
{
+ omap_set_gpio_dataout(gpio, value);
return __gpio_set_direction(gpio, 0);
}
return pxa_gpio_mode(gpio | GPIO_IN);
}
-static inline int gpio_direction_output(unsigned gpio)
+static inline int gpio_direction_output(unsigned gpio, int value)
{
- return pxa_gpio_mode(gpio | GPIO_OUT);
+ return pxa_gpio_mode(gpio | GPIO_OUT | (value ? 0 : GPIO_DFLT_LOW));
}
static inline int __gpio_get_value(unsigned gpio)
#define GPIO112_MMCCMD_MD (112 | GPIO_ALT_FN_1_OUT)
#define GPIO113_I2S_SYSCLK_MD (113 | GPIO_ALT_FN_1_OUT)
#define GPIO113_AC97_RESET_N_MD (113 | GPIO_ALT_FN_2_OUT)
-#define GPIO117_I2CSCL_MD (117 | GPIO_ALT_FN_1_OUT)
+#define GPIO117_I2CSCL_MD (117 | GPIO_ALT_FN_1_IN)
#define GPIO118_I2CSDA_MD (118 | GPIO_ALT_FN_1_IN)
/*
return 0;
}
-static inline int gpio_direction_output(unsigned gpio)
+static inline int gpio_direction_output(unsigned gpio, int value)
{
s3c2410_gpio_cfgpin(gpio, S3C2410_GPIO_OUTPUT);
+ /* REVISIT can we write the value first, to avoid glitching? */
+ s3c2410_gpio_setpin(gpio, value);
return 0;
}
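For reference, a hypothetical caller of the two-argument form (the GPIO number, label, and headers are illustrative assumptions, not taken from this patch); passing the initial level together with the direction is what avoids first driving the pin to a default level and glitching the signal:

#include <linux/init.h>
#include <asm/gpio.h>		/* assumed platform GPIO interface */

#define EXAMPLE_LED_GPIO	42	/* assumed board-specific line */

static int __init example_led_init(void)
{
	int err;

	err = gpio_request(EXAMPLE_LED_GPIO, "example-led");
	if (err)
		return err;

	/* configure as output and drive it high in a single step */
	err = gpio_direction_output(EXAMPLE_LED_GPIO, 1);
	if (err)
		gpio_free(EXAMPLE_LED_GPIO);
	return err;
}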
}
extern int gpio_direction_input(unsigned gpio);
-extern int gpio_direction_output(unsigned gpio);
+extern int gpio_direction_output(unsigned gpio, int value);
static inline int gpio_get_value(unsigned gpio)
unsigned long tmp, tmp2;
__asm__ __volatile__("@ atomic_clear_mask\n"
-"1: ldrex %0, %2\n"
+"1: ldrex %0, [%2]\n"
" bic %0, %0, %3\n"
-" strex %1, %0, %2\n"
+" strex %1, %0, [%2]\n"
" teq %1, #0\n"
" bne 1b"
: "=&r" (tmp), "=&r" (tmp2)
#ifdef __KERNEL__
+#include <asm/memory.h>
#define CPU_ARCH_UNKNOWN 0
#define CPU_ARCH_ARMv3 1
#define vectors_high() (0)
#endif
-#if __LINUX_ARM_ARCH__ >= 6
+#if defined(CONFIG_CPU_XSC3) || __LINUX_ARM_ARCH__ >= 6
#define isb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
: : "r" (0) : "memory")
#define dsb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
#define dmb() __asm__ __volatile__ ("" : : : "memory")
#endif
-#define mb() barrier()
-#define rmb() barrier()
-#define wmb() barrier()
-#define read_barrier_depends() do { } while(0)
-
-#ifdef CONFIG_SMP
-#define smp_mb() dmb()
-#define smp_rmb() dmb()
-#define smp_wmb() dmb()
-#define smp_read_barrier_depends() read_barrier_depends()
+#ifndef CONFIG_SMP
+#define mb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
+#define rmb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
+#define wmb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
+#define smp_mb() barrier()
+#define smp_rmb() barrier()
+#define smp_wmb() barrier()
#else
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
-#define smp_read_barrier_depends() read_barrier_depends()
-#endif /* CONFIG_SMP */
+#define mb() dmb()
+#define rmb() dmb()
+#define wmb() dmb()
+#define smp_mb() dmb()
+#define smp_rmb() dmb()
+#define smp_wmb() dmb()
+#endif
+#define read_barrier_depends() do { } while(0)
+#define smp_read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; smp_mb(); } while (0)
#define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t");
#define __NR_move_pages (__NR_SYSCALL_BASE+344)
#define __NR_getcpu (__NR_SYSCALL_BASE+345)
/* 346 for epoll_pwait */
-#define __NR_sys_kexec_load (__NR_SYSCALL_BASE+347)
+#define __NR_kexec_load (__NR_SYSCALL_BASE+347)
/*
* The following SWIs are ARM private.
void gpio_free(unsigned int gpio);
int gpio_direction_input(unsigned int gpio);
-int gpio_direction_output(unsigned int gpio);
+int gpio_direction_output(unsigned int gpio, int value);
int gpio_get_value(unsigned int gpio);
void gpio_set_value(unsigned int gpio, int value);
#ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
#define arch_enter_lazy_mmu_mode() do {} while (0)
#define arch_leave_lazy_mmu_mode() do {} while (0)
+#define arch_flush_lazy_mmu_mode() do {} while (0)
#endif
/*
#ifndef __HAVE_ARCH_ENTER_LAZY_CPU_MODE
#define arch_enter_lazy_cpu_mode() do {} while (0)
#define arch_leave_lazy_cpu_mode() do {} while (0)
+#define arch_flush_lazy_cpu_mode() do {} while (0)
#endif
/*
#define ARCH_APICTIMER_STOPS_ON_C3 1
extern int timer_over_8254;
+extern int local_apic_timer_c2_ok;
#else /* !CONFIG_X86_LOCAL_APIC */
static inline void lapic_shutdown(void) { }
#define X86_FEATURE_ARCH_PERFMON (3*32+11) /* Intel Architectural PerfMon */
#define X86_FEATURE_PEBS (3*32+12) /* Precise-Event Based Sampling */
#define X86_FEATURE_BTS (3*32+13) /* Branch Trace Store */
+#define X86_FEATURE_LAPIC_TIMER_BROKEN (3*32+ 14) /* lapic timer broken in C1 */
/* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
#define X86_FEATURE_XMM3 (4*32+ 0) /* Streaming SIMD Extensions-3 */
pr_reg[4] = regs->edi; \
pr_reg[5] = regs->ebp; \
pr_reg[6] = regs->eax; \
- pr_reg[7] = regs->xds; \
- pr_reg[8] = regs->xes; \
- pr_reg[9] = regs->xfs; \
+ pr_reg[7] = regs->xds & 0xffff; \
+ pr_reg[8] = regs->xes & 0xffff; \
+ pr_reg[9] = regs->xfs & 0xffff; \
savesegment(gs,pr_reg[10]); \
pr_reg[11] = regs->orig_eax; \
pr_reg[12] = regs->eip; \
- pr_reg[13] = regs->xcs; \
+ pr_reg[13] = regs->xcs & 0xffff; \
pr_reg[14] = regs->eflags; \
pr_reg[15] = regs->esp; \
- pr_reg[16] = regs->xss;
+ pr_reg[16] = regs->xss & 0xffff;
/* This yields a mask that user programs can use to figure out what
instruction set this CPU supports. This could be done in user space,
#define MSR_K7_FID_VID_CTL 0xC0010041
#define MSR_K7_FID_VID_STATUS 0xC0010042
+#define MSR_K8_ENABLE_C1E 0xC0010055
+
/* extended feature register */
#define MSR_EFER 0xc0000080
extern atomic_t nmi_active;
extern unsigned int nmi_watchdog;
-#define NMI_DEFAULT 0
+#define NMI_DEFAULT -1
#define NMI_NONE 0
#define NMI_IO_APIC 1
#define NMI_LOCAL_APIC 2
void (*flush_tlb_kernel)(void);
void (*flush_tlb_single)(u32 addr);
- void (fastcall *map_pt_hook)(int type, pte_t *va, u32 pfn);
+ void (*map_pt_hook)(int type, pte_t *va, u32 pfn);
void (*alloc_pt)(u32 pfn);
void (*alloc_pd)(u32 pfn);
#define PARAVIRT_LAZY_NONE 0
#define PARAVIRT_LAZY_MMU 1
#define PARAVIRT_LAZY_CPU 2
+#define PARAVIRT_LAZY_FLUSH 3
#define __HAVE_ARCH_ENTER_LAZY_CPU_MODE
#define arch_enter_lazy_cpu_mode() paravirt_ops.set_lazy_mode(PARAVIRT_LAZY_CPU)
#define arch_leave_lazy_cpu_mode() paravirt_ops.set_lazy_mode(PARAVIRT_LAZY_NONE)
+#define arch_flush_lazy_cpu_mode() paravirt_ops.set_lazy_mode(PARAVIRT_LAZY_FLUSH)
#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
#define arch_enter_lazy_mmu_mode() paravirt_ops.set_lazy_mode(PARAVIRT_LAZY_MMU)
#define arch_leave_lazy_mmu_mode() paravirt_ops.set_lazy_mode(PARAVIRT_LAZY_NONE)
+#define arch_flush_lazy_mmu_mode() paravirt_ops.set_lazy_mode(PARAVIRT_LAZY_FLUSH)
/* These all sit in the .parainstructions section to tell us what to patch. */
struct paravirt_patch {
return oldbit;
}
-static __always_inline int sync_const_test_bit(int nr, const volatile unsigned long *addr)
+static __always_inline int sync_constant_test_bit(int nr, const volatile unsigned long *addr)
{
return ((1UL << (nr & 31)) &
(((const volatile unsigned int *)addr)[nr >> 5])) != 0;
#define B3000000 0010015
#define B3500000 0010016
#define B4000000 0010017
-#define CIBAUD 002003600000 /* input baud rate (not used) */
+#define CIBAUD 002003600000
#define CMSPAR 010000000000 /* mark or space (stick) parity */
#define CRTSCTS 020000000000 /* flow control */
#ifdef CONFIG_X86_LOCAL_APIC
extern void __init vmi_timer_setup_boot_alarm(void);
-extern void __init vmi_timer_setup_secondary_alarm(void);
+extern void __devinit vmi_timer_setup_secondary_alarm(void);
extern void apic_vmi_timer_interrupt(void);
#endif
# define platform_setup_msi_irq ia64_mv.setup_msi_irq
# define platform_teardown_msi_irq ia64_mv.teardown_msi_irq
# define platform_pci_fixup_bus ia64_mv.pci_fixup_bus
+# define platform_kernel_launch_event ia64_mv.kernel_launch_event
# endif
/* __attribute__((__aligned__(16))) is required to make size of the
platform_setup_msi_irq, \
platform_teardown_msi_irq, \
platform_pci_fixup_bus, \
+ platform_kernel_launch_event \
}
extern struct ia64_machine_vector ia64_mv;
extern void find_initrd (void);
extern int filter_rsvd_memory (unsigned long start, unsigned long end, void *arg);
extern void efi_memmap_init(unsigned long *, unsigned long *);
+extern int find_max_min_low_pfn (unsigned long , unsigned long, void *);
extern unsigned long vmcore_find_descriptor_size(unsigned long address);
extern int reserve_elfcorehdr(unsigned long *start, unsigned long *end);
unsigned int a, b;
};
-#define desc_empty(desc) (!((desc)->a + (desc)->b))
+#define desc_empty(desc) (!((desc)->a | (desc)->b))
#define desc_equal(desc1, desc2) (((desc1)->a == (desc2)->a) && ((desc1)->b == (desc2)->b))
#define GDT_ENTRY_TLS_ENTRIES 3
#define IS_PCI_BRIDGE_ASIC(asic) (asic == PCIIO_ASIC_TYPE_PIC || \
asic == PCIIO_ASIC_TYPE_TIOCP)
#define IS_PIC_SOFT(ps) (ps->pbi_bridge_type == PCIBR_BRIDGETYPE_PIC)
+#define IS_TIOCP_SOFT(ps) (ps->pbi_bridge_type == PCIBR_BRIDGETYPE_TIOCP)
/*
 * Bridge PMU Address Translation Entry Attributes
*/
#define PCI32_ATE_V (0x1 << 0)
-#define PCI32_ATE_CO (0x1 << 1)
-#define PCI32_ATE_PREC (0x1 << 2)
+#define PCI32_ATE_CO (0x1 << 1) /* PIC ASIC ONLY */
+#define PCI32_ATE_PIO (0x1 << 1) /* TIOCP ASIC ONLY */
#define PCI32_ATE_MSI (0x1 << 2)
#define PCI32_ATE_PREF (0x1 << 3)
#define PCI32_ATE_BAR (0x1 << 4)
#ifndef _ASM_M32R_DMA_MAPPING_H
#define _ASM_M32R_DMA_MAPPING_H
-/*
- * NOTE: Do not include <asm-generic/dma-mapping.h>
- * Because it requires PCI stuffs, but current M32R don't provide these.
- */
-
-static inline void *
-dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
- gfp_t flag)
-{
- return (void *)NULL;
-}
-
-static inline void
-dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
- dma_addr_t dma_handle)
-{
- return;
-}
+#include <asm-generic/dma-mapping-broken.h>
#endif /* _ASM_M32R_DMA_MAPPING_H */
void *, dma_addr_t);
static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
- dma_addr_t *handle, int flag)
+ dma_addr_t *handle, gfp_t flag)
{
return dma_alloc_coherent(dev, size, handle, flag);
}
#include <asm/atarihw.h>
#define RTC_PORT(x) (TT_RTC_BAS + 2*(x))
+#define RTC_ALWAYS_BCD 0
#define CMOS_READ(addr) ({ \
atari_outb_p((addr),RTC_PORT(0)); \
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
v->counter += i;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
}
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
v->counter -= i;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
}
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
result = v->counter;
result += i;
v->counter = result;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
smp_mb();
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
result = v->counter;
result -= i;
v->counter = result;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
smp_mb();
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
result = v->counter;
result -= i;
if (result >= 0)
v->counter = result;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
smp_mb();
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
v->counter += i;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
}
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
v->counter -= i;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
}
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
result = v->counter;
result += i;
v->counter = result;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
smp_mb();
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
result = v->counter;
result -= i;
v->counter = result;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
smp_mb();
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
result = v->counter;
result -= i;
if (result >= 0)
v->counter = result;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
smp_mb();
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
*a |= mask;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
}
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
*a &= ~mask;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
}
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
*a ^= mask;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
}
}
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
retval = (mask & *a) != 0;
*a |= mask;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
return retval;
}
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
retval = (mask & *a) != 0;
*a &= ~mask;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
return retval;
}
a += nr >> SZLONG_LOG;
mask = 1UL << bit;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
retval = (mask & *a) != 0;
*a ^= mask;
- local_irq_restore(flags);
+ raw_local_irq_restore(flags);
return retval;
}
#define BUG_ON(condition) \
do { \
- __asm__ __volatile__("tne $0, %0" : : "r" (condition)); \
+ __asm__ __volatile__("tne $0, %0, %1" \
+ : : "r" (condition), "i" (BRK_BUG)); \
} while (0)
#define HAVE_ARCH_BUG_ON
#define flush_dcache_mmap_lock(mapping) do { } while (0)
#define flush_dcache_mmap_unlock(mapping) do { } while (0)
+#define ARCH_HAS_FLUSH_ANON_PAGE
+extern void __flush_anon_page(struct page *, unsigned long);
+static inline void flush_anon_page(struct vm_area_struct *vma,
+ struct page *page, unsigned long vmaddr)
+{
+ if (cpu_has_dc_aliases && PageAnon(page))
+ __flush_anon_page(page, vmaddr);
+}
+
static inline void flush_icache_page(struct vm_area_struct *vma,
struct page *page)
{
/* Run kernel code uncached, useful for cache probing functions. */
unsigned long __init run_uncached(void *func);
+extern void *kmap_coherent(struct page *page, unsigned long addr);
+extern void kunmap_coherent(struct page *page);
+
#endif /* _ASM_CACHEFLUSH_H */
#else
"r" (proto + len),
#endif
- "r" (sum));
+ "r" ((__force unsigned long)sum));
return sum;
}
#endif
#ifndef cpu_has_fpu
#define cpu_has_fpu (current_cpu_data.options & MIPS_CPU_FPU)
+#define raw_cpu_has_fpu (raw_current_cpu_data.options & MIPS_CPU_FPU)
+#else
+#define raw_cpu_has_fpu cpu_has_fpu
#endif
#ifndef cpu_has_32fpr
#define cpu_has_32fpr (cpu_data[0].options & MIPS_CPU_32FPR)
extern struct cpuinfo_mips cpu_data[];
#define current_cpu_data cpu_data[smp_processor_id()]
+#define raw_current_cpu_data cpu_data[raw_smp_processor_id()]
extern void cpu_probe(void);
extern void cpu_report(void);
__delay(usecs);
}
-#define __udelay_val cpu_data[smp_processor_id()].udelay_val
+#define __udelay_val cpu_data[raw_smp_processor_id()].udelay_val
#define udelay(usecs) __udelay((usecs),__udelay_val)
struct sigcontext;
struct sigcontext32;
-extern asmlinkage int (*save_fp_context)(struct sigcontext *sc);
-extern asmlinkage int (*restore_fp_context)(struct sigcontext *sc);
+extern asmlinkage int (*save_fp_context)(struct sigcontext __user *sc);
+extern asmlinkage int (*restore_fp_context)(struct sigcontext __user *sc);
-extern asmlinkage int (*save_fp_context32)(struct sigcontext32 *sc);
-extern asmlinkage int (*restore_fp_context32)(struct sigcontext32 *sc);
+extern asmlinkage int (*save_fp_context32)(struct sigcontext32 __user *sc);
+extern asmlinkage int (*restore_fp_context32)(struct sigcontext32 __user *sc);
extern void fpu_emulator_init_fpu(void);
extern void _init_fpu(void);
return cpu_has_fpu && __is_fpu_owner();
}
-static inline void own_fpu(void)
+static inline void __own_fpu(void)
{
- if (cpu_has_fpu) {
- __enable_fpu();
- KSTK_STATUS(current) |= ST0_CU1;
- set_thread_flag(TIF_USEDFPU);
+ __enable_fpu();
+ KSTK_STATUS(current) |= ST0_CU1;
+ set_thread_flag(TIF_USEDFPU);
+}
+
+static inline void own_fpu_inatomic(int restore)
+{
+ if (cpu_has_fpu && !__is_fpu_owner()) {
+ __own_fpu();
+ if (restore)
+ _restore_fp(current);
}
}
-static inline void lose_fpu(void)
+static inline void own_fpu(int restore)
{
- if (cpu_has_fpu) {
+ preempt_disable();
+ own_fpu_inatomic(restore);
+ preempt_enable();
+}
+
+static inline void lose_fpu(int save)
+{
+ preempt_disable();
+ if (is_fpu_owner()) {
+ if (save)
+ _save_fp(current);
KSTK_STATUS(current) &= ~ST0_CU1;
clear_thread_flag(TIF_USEDFPU);
__disable_fpu();
}
+ preempt_enable();
}
static inline void init_fpu(void)
{
+ preempt_disable();
if (cpu_has_fpu) {
+ __own_fpu();
_init_fpu();
} else {
fpu_emulator_init_fpu();
}
+ preempt_enable();
}
static inline void save_fp(struct task_struct *tsk)
_ehb
)
ASMMACRO(irq_enable_hazard,
+ _ehb
)
ASMMACRO(irq_disable_hazard,
_ehb
#ifndef __ASSEMBLY__
+#include <linux/compiler.h>
#include <asm/hazards.h>
-/*
- * CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY does prompt replay of deferred IPIs,
- * at the cost of branch and call overhead on each local_irq_restore()
- */
-
-#ifdef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY
-
-extern void smtc_ipi_replay(void);
-
-#define irq_restore_epilog(flags) \
-do { \
- if (!(flags & 0x0400)) \
- smtc_ipi_replay(); \
-} while (0)
-
-#else
-
-#define irq_restore_epilog(ignore) do { } while (0)
-
-#endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */
-
__asm__ (
" .macro raw_local_irq_enable \n"
" .set push \n"
" .set pop \n"
" .endm \n");
-#define raw_local_irq_restore(flags) \
-do { \
- unsigned long __tmp1; \
- \
- __asm__ __volatile__( \
- "raw_local_irq_restore\t%0" \
- : "=r" (__tmp1) \
- : "0" (flags) \
- : "memory"); \
- irq_restore_epilog(flags); \
-} while(0)
+extern void smtc_ipi_replay(void);
+
+static inline void raw_local_irq_restore(unsigned long flags)
+{
+ unsigned long __tmp1;
+
+#ifdef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY
+ /*
+ * CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY does prompt replay of deferred
+ * IPIs, at the cost of branch and call overhead on each
+ * local_irq_restore()
+ */
+ if (unlikely(!(flags & 0x0400)))
+ smtc_ipi_replay();
+#endif
+
+ __asm__ __volatile__(
+ "raw_local_irq_restore\t%0"
+ : "=r" (__tmp1)
+ : "0" (flags)
+ : "memory");
+}
static inline int raw_irqs_disabled_flags(unsigned long flags)
{
static void auide_setup_ports(hw_regs_t *hw, _auide_hwif *ahwif);
int __init auide_probe(void);
-#ifdef CONFIG_PM
- int au1200ide_pm_callback( au1xxx_power_dev_t *dev,
- au1xxx_request_t request, void *data);
- static int au1xxxide_pm_standby( au1xxx_power_dev_t *dev );
- static int au1xxxide_pm_sleep( au1xxx_power_dev_t *dev );
- static int au1xxxide_pm_resume( au1xxx_power_dev_t *dev );
- static int au1xxxide_pm_getstatus( au1xxx_power_dev_t *dev );
- static int au1xxxide_pm_access( au1xxx_power_dev_t *dev );
- static int au1xxxide_pm_idle( au1xxx_power_dev_t *dev );
- static int au1xxxide_pm_cleanup( au1xxx_power_dev_t *dev );
-#endif
-
-
-/*
- * Multi-Word DMA + DbDMA functions
- */
-#ifdef CONFIG_BLK_DEV_IDE_AU1XXX_MDMA2_DBDMA
- static int auide_build_sglist(ide_drive_t *drive, struct request *rq);
- static int auide_build_dmatable(ide_drive_t *drive);
- static int auide_dma_end(ide_drive_t *drive);
- ide_startstop_t auide_dma_intr (ide_drive_t *drive);
- static void auide_dma_exec_cmd(ide_drive_t *drive, u8 command);
- static int auide_dma_setup(ide_drive_t *drive);
- static int auide_dma_check(ide_drive_t *drive);
- static int auide_dma_test_irq(ide_drive_t *drive);
- static int auide_dma_host_off(ide_drive_t *drive);
- static int auide_dma_host_on(ide_drive_t *drive);
- static int auide_dma_lostirq(ide_drive_t *drive);
- static int auide_dma_on(ide_drive_t *drive);
- static void auide_ddma_tx_callback(int irq, void *param);
- static void auide_ddma_rx_callback(int irq, void *param);
- static int auide_dma_off_quietly(ide_drive_t *drive);
-#endif /* end CONFIG_BLK_DEV_IDE_AU1XXX_MDMA2_DBDMA */
-
/*******************************************************************************
* PIO Mode timing calculation : *
* *
struct device;
-static dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, size_t size)
+static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr,
+ size_t size)
{
dma_addr_t pa = dev_to_baddr(dev, virt_to_phys(addr));
return dma_addr & (0xffUL << 56);
}
-static void plat_unmap_dma_mem(dma_addr_t dma_addr)
+static inline void plat_unmap_dma_mem(dma_addr_t dma_addr)
{
}
#define RAM_OFFSET_MASK 0x3fffffffUL
-static dma_addr_t plat_map_dma_mem(struct device *dev, void *addr, size_t size)
+static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr,
+ size_t size)
{
dma_addr_t pa = virt_to_phys(addr) & RAM_OFFSET_MASK;
return addr;
}
-static void plat_unmap_dma_mem(dma_addr_t dma_addr)
+static inline void plat_unmap_dma_mem(dma_addr_t dma_addr)
{
}
};
extern void ll_mv64340_irq(void);
+extern void mv64340_irq_init(unsigned int base);
#endif /* __ASM_MIPS_MARVELL_H */
{
return pud_val(pud);
}
-#define pud_phys(pud) (pud_val(pud) - PAGE_OFFSET)
+#define pud_phys(pud) virt_to_phys((void *)pud_val(pud))
#define pud_page(pud) (pfn_to_page(pud_phys(pud) >> PAGE_SHIFT))
/* Find an entry in the second-level page table.. */
* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*/
-#define pmd_phys(pmd) (pmd_val(pmd) - PAGE_OFFSET)
+#define pmd_phys(pmd) virt_to_phys((void *)pmd_val(pmd))
#define pmd_page(pmd) (pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT))
#define pmd_page_vaddr(pmd) pmd_val(pmd)
extern int rtlx_open(int index, int can_sleep);
extern int rtlx_release(int index);
-extern ssize_t rtlx_read(int index, void *buff, size_t count, int user);
-extern ssize_t rtlx_write(int index, void *buffer, size_t count, int user);
+extern ssize_t rtlx_read(int index, void __user *buff, size_t count);
+extern ssize_t rtlx_write(int index, const void __user *buffer, size_t count);
extern unsigned int rtlx_read_poll(int index, int can_sleep);
extern unsigned int rtlx_write_poll(int index);
register signed int __a2 __asm__("$5") = (int) (long) (a2); \
register signed int __a3 __asm__("$6") = (int) (long) (a3); \
register signed int __a4 __asm__("$7") = (int) (long) (a4); \
- register signed int __a5 = (a5); \
+ register signed int __a5 = (int) (long) (a5); \
long __vec = (long) romvec->dest; \
__asm__ __volatile__( \
"dsubu\t$29, 32\n\t" \
#endif
-#define IOADDR(a) ((volatile void __iomem *)(IO_BASE + (a)))
+#define IOADDR(a) ((void __iomem *)(IO_BASE + (a)))
#endif
#define K_SYS_REVISION_BCM112x_A2 0x21
#define K_SYS_REVISION_BCM112x_A3 0x22
#define K_SYS_REVISION_BCM112x_A4 0x23
+#define K_SYS_REVISION_BCM112x_B0 0x30
#define K_SYS_REVISION_BCM1480_S0 0x01
#define K_SYS_REVISION_BCM1480_A1 0x02
spin_unlock_irqrestore(&q->lock, flags);
}
-static inline struct smtc_ipi *smtc_ipi_dq(struct smtc_ipi_q *q)
+static inline struct smtc_ipi *__smtc_ipi_dq(struct smtc_ipi_q *q)
{
struct smtc_ipi *p;
- long flags;
- spin_lock_irqsave(&q->lock, flags);
if (q->head == NULL)
p = NULL;
else {
if (q->head == NULL)
q->tail = NULL;
}
+
+ return p;
+}
+
+static inline struct smtc_ipi *smtc_ipi_dq(struct smtc_ipi_q *q)
+{
+ unsigned long flags;
+ struct smtc_ipi *p;
+
+ spin_lock_irqsave(&q->lock, flags);
+ p = __smtc_ipi_dq(q);
spin_unlock_irqrestore(&q->lock, flags);
+
return p;
}
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
retval = *m;
*m = val;
- local_irq_restore(flags); /* implies memory barrier */
+ raw_local_irq_restore(flags); /* implies memory barrier */
}
smp_mb();
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
retval = *m;
*m = val;
- local_irq_restore(flags); /* implies memory barrier */
+ raw_local_irq_restore(flags); /* implies memory barrier */
}
smp_mb();
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
retval = *m;
if (retval == old)
*m = new;
- local_irq_restore(flags); /* implies memory barrier */
+ raw_local_irq_restore(flags); /* implies memory barrier */
}
smp_mb();
} else {
unsigned long flags;
- local_irq_save(flags);
+ raw_local_irq_save(flags);
retval = *m;
if (retval == old)
*m = new;
- local_irq_restore(flags); /* implies memory barrier */
+ raw_local_irq_restore(flags); /* implies memory barrier */
}
smp_mb();
._dma_setup = vdma_dma_setup
};
-static int fd_request_dma()
+static int fd_request_dma(void)
{
if (can_use_virtual_dma & 1) {
fd_ops = &virt_dma_ops;
#define PLPAR_HCALL_BUFSIZE 4
long plpar_hcall(unsigned long opcode, unsigned long *retbuf, ...);
+/**
+ * plpar_hcall_raw: - Make a hypervisor call without calculating hcall stats
+ * @opcode: The hypervisor call to make.
+ * @retbuf: Buffer to store up to 4 return arguments in.
+ *
+ * This call supports up to 6 arguments and 4 return arguments. Use
+ * PLPAR_HCALL_BUFSIZE to size the return argument buffer.
+ *
+ * Used when the phyp interface needs to be called in real mode. Similar to
+ * plpar_hcall, but plpar_hcall_raw works in real mode and does not
+ * calculate hypervisor call statistics.
+ */
+long plpar_hcall_raw(unsigned long opcode, unsigned long *retbuf, ...);
+
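/*
 * Illustrative sketch only (not taken from the tree): a hypothetical caller
 * following the convention documented above, sizing the return buffer with
 * PLPAR_HCALL_BUFSIZE. H_EXAMPLE stands in for a real hcall opcode.
 */
static inline long example_hcall_real_mode(unsigned long arg)
{
	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];	/* room for 4 return args */

	return plpar_hcall_raw(H_EXAMPLE, retbuf, arg);
}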
/**
* plpar_hcall9: - Make a pseries hypervisor call with up to 9 return arguments
* @opcode: The hypervisor call to make.
u8 uccs; /* UCCx status register */
u8 res3[0x24];
__be16 utpt;
+ u8 res4[0x52];
u8 guemr; /* UCC general extended mode register */
- u8 res4[0x200 - 0x091];
+ u8 res5[0x200 - 0x091];
} __attribute__ ((packed));
/* QE UCC Fast */
int spu_irq_class_1_bottom(struct spu *spu);
void spu_irq_setaffinity(struct spu *spu, int cpu);
+extern void spu_invalidate_slbs(struct spu *spu);
+extern void spu_associate_mm(struct spu *spu, struct mm_struct *mm);
+
+/* Calls from the memory management to the SPU */
+struct mm_struct;
+extern void spu_flush_all_slbs(struct mm_struct *mm);
+
/* system callbacks from the SPU */
struct spu_syscall_block {
u64 nr_ret;
* @spu_chnlcnt_RW: Array of saved channel counts.
* @spu_chnldata_RW: Array of saved channel data.
* @suspend_time: Time stamp when decrementer disabled.
- * @slb_esid_RW: Array of saved SLB esid entries.
- * @slb_vsid_RW: Array of saved SLB vsid entries.
*
* Structure representing the whole of the SPU
* context save area (CSA). This struct contains
u32 spu_mailbox_data[4];
u32 pu_mailbox_data[1];
unsigned long suspend_time;
- u64 slb_esid_RW[8];
- u64 slb_vsid_RW[8];
spinlock_t register_lock;
};
SYSCALL_SPU(unshare)
SYSCALL_SPU(splice)
SYSCALL_SPU(tee)
-SYSCALL_SPU(vmsplice)
+COMPAT_SYS_SPU(vmsplice)
COMPAT_SYS_SPU(openat)
SYSCALL_SPU(mkdirat)
SYSCALL_SPU(mknodat)
SYSCALL_SPU(faccessat)
COMPAT_SYS_SPU(get_robust_list)
COMPAT_SYS_SPU(set_robust_list)
-COMPAT_SYS(move_pages)
+COMPAT_SYS_SPU(move_pages)
SYSCALL_SPU(getcpu)
+COMPAT_SYS(epoll_pwait)
#define __NR_get_robust_list 299
#define __NR_set_robust_list 300
#define __NR_move_pages 301
+#define __NR_getcpu 302
+#define __NR_epoll_pwait 303
#ifdef __KERNEL__
-#define __NR_syscalls 302
+#define __NR_syscalls 304
#define __NR__exit __NR_exit
#define NR_syscalls __NR_syscalls
unsigned short len, unsigned short proto,
__wsum sum)
{
-#ifndef __s390x__
- asm volatile(
- " alr %0,%1\n" /* sum += saddr */
- " brc 12,0f\n"
- " ahi %0,1\n" /* add carry */
- "0:"
- : "+&d" (sum) : "d" (saddr) : "cc");
- asm volatile(
- " alr %0,%1\n" /* sum += daddr */
- " brc 12,1f\n"
- " ahi %0,1\n" /* add carry */
- "1:"
- : "+&d" (sum) : "d" (daddr) : "cc");
- asm volatile(
- " alr %0,%1\n" /* sum += len + proto */
- " brc 12,2f\n"
- " ahi %0,1\n" /* add carry */
- "2:"
- : "+&d" (sum)
- : "d" (len + proto)
- : "cc");
-#else /* __s390x__ */
- asm volatile(
- " lgfr %0,%0\n"
- " algr %0,%1\n" /* sum += saddr */
- " brc 12,0f\n"
- " aghi %0,1\n" /* add carry */
- "0: algr %0,%2\n" /* sum += daddr */
- " brc 12,1f\n"
- " aghi %0,1\n" /* add carry */
- "1: algfr %0,%3\n" /* sum += len + proto */
- " brc 12,2f\n"
- " aghi %0,1\n" /* add carry */
- "2: srlg 0,%0,32\n"
- " alr %0,0\n" /* fold to 32 bits */
- " brc 12,3f\n"
- " ahi %0,1\n" /* add carry */
- "3: llgfr %0,%0"
- : "+&d" (sum)
- : "d" (saddr), "d" (daddr),
- "d" (len + proto)
- : "cc", "0");
-#endif /* __s390x__ */
- return sum;
+ __u32 csum = (__force __u32)sum;
+
+ csum += (__force __u32)saddr;
+ if (csum < (__force __u32)saddr)
+ csum++;
+
+ csum += (__force __u32)daddr;
+ if (csum < (__force __u32)daddr)
+ csum++;
+
+ csum += len + proto;
+ if (csum < len + proto)
+ csum++;
+
+ return (__force __wsum)csum;
}
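/*
 * Minimal sketch (illustration only, not part of the header) of the
 * end-around-carry step the C version above relies on: whenever a 32-bit
 * addition overflows, the carry is folded back into the sum, which keeps
 * the running value correct for the later one's-complement fold.
 */
static inline unsigned int example_add32_with_carry(unsigned int sum,
						    unsigned int term)
{
	sum += term;
	if (sum < term)		/* unsigned wrap-around means a carry occurred */
		sum++;
	return sum;
}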
/*
#define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \
sizeof(struct ipl_block_fcp))
+#define IPL_PARM_BLK0_FCP_LEN (sizeof(struct ipl_block_fcp) + 8)
+
#define IPL_PARM_BLK_CCW_LEN (sizeof(struct ipl_list_hdr) + \
sizeof(struct ipl_block_ccw))
+#define IPL_PARM_BLK0_CCW_LEN (sizeof(struct ipl_block_ccw) + 8)
+
#define IPL_MAX_SUPPORTED_VERSION (0)
#define IPL_PARMBLOCK_START ((struct ipl_parameter_block *) \
u8 vm_flags;
u8 reserved3[3];
u32 vm_parm_len;
+ u8 reserved4[80];
} __attribute__((packed));
struct ipl_parameter_block {
/* Number 310 is reserved for new sys_move_pages */
#define __NR_getcpu 311
#define __NR_epoll_pwait 312
+#define __NR_utimes 313
-#define NR_syscalls 313
+#define NR_syscalls 314
/*
* There are some system calls that are not present on 64 bit, some
*
*/
-#define HP680_BTN_IRQ IRQ0_IRQ
-#define HP680_TS_IRQ IRQ3_IRQ
-#define HP680_HD64461_IRQ IRQ4_IRQ
+#define HP680_BTN_IRQ 32 /* IRQ0_IRQ */
+#define HP680_TS_IRQ 35 /* IRQ3_IRQ */
+#define HP680_HD64461_IRQ 36 /* IRQ4_IRQ */
#define DAC_LCD_BRIGHTNESS 0
#define DAC_SPEAKER_VOLUME 1
#define TCSETSW 0x5403
#define TCSETSF 0x5404
-#define TCGETA _IOR('t', 23, struct termio)
-#define TCSETA _IOW('t', 24, struct termio)
-#define TCSETAW _IOW('t', 25, struct termio)
-#define TCSETAF _IOW('t', 28, struct termio)
+#define TCGETA 0x80127417 /* _IOR('t', 23, struct termio) */
+#define TCSETA 0x40127418 /* _IOW('t', 24, struct termio) */
+#define TCSETAW 0x40127419 /* _IOW('t', 25, struct termio) */
+#define TCSETAF 0x4012741C /* _IOW('t', 28, struct termio) */
#define TCSBRK _IO('t', 29)
#define TCXONC _IO('t', 30)
#define TCFLSH _IO('t', 31)
-#define TIOCSWINSZ _IOW('t', 103, struct winsize)
-#define TIOCGWINSZ _IOR('t', 104, struct winsize)
+#define TIOCSWINSZ 0x40087467 /* _IOW('t', 103, struct winsize) */
+#define TIOCGWINSZ 0x80087468 /* _IOR('t', 104, struct winsize) */
#define TIOCSTART _IO('t', 110) /* start output, like ^Q */
#define TIOCSTOP _IO('t', 111) /* stop output, like ^S */
#define TIOCOUTQ _IOR('t', 115, int) /* output queue size */
#define TIOCSSOFTCAR _IOW('T', 26, unsigned int) /* 0x541A */
#define TIOCLINUX _IOW('T', 28, char) /* 0x541C */
#define TIOCCONS _IO('T', 29) /* 0x541D */
-#define TIOCGSERIAL _IOR('T', 30, struct serial_struct) /* 0x541E */
-#define TIOCSSERIAL _IOW('T', 31, struct serial_struct) /* 0x541F */
+#define TIOCGSERIAL 0x803C541E /* _IOR('T', 30, struct serial_struct) 0x541E */
+#define TIOCSSERIAL 0x403C541F /* _IOW('T', 31, struct serial_struct) 0x541F */
#define TIOCPKT _IOW('T', 32, int) /* 0x5420 */
# define TIOCPKT_DATA 0
# define TIOCPKT_FLUSHREAD 1
#define TIOCSERSWILD _IOW('T', 85, int) /* 0x5455 */
#define TIOCGLCKTRMIOS 0x5456
#define TIOCSLCKTRMIOS 0x5457
-#define TIOCSERGSTRUCT _IOR('T', 88, struct async_struct) /* 0x5458 */ /* For debugging only */
+#define TIOCSERGSTRUCT 0x80d85458 /* _IOR('T', 88, struct async_struct) 0x5458 */ /* For debugging only */
#define TIOCSERGETLSR _IOR('T', 89, unsigned int) /* 0x5459 */ /* Get line status register */
/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
# define TIOCSER_TEMT 0x01 /* Transmitter physically empty */
-#define TIOCSERGETMULTI _IOR('T', 90, struct serial_multiport_struct) /* 0x545A */ /* Get multiport config */
-#define TIOCSERSETMULTI _IOW('T', 91, struct serial_multiport_struct) /* 0x545B */ /* Set multiport config */
+#define TIOCSERGETMULTI 0x80A8545A /* _IOR('T', 90, struct serial_multiport_struct) 0x545A */ /* Get multiport config */
+#define TIOCSERSETMULTI 0x40A8545B /* _IOW('T', 91, struct serial_multiport_struct) 0x545B */ /* Set multiport config */
#define TIOCMIWAIT _IO('T', 92) /* 0x545C */ /* wait for a change on serial input line(s) */
#define TIOCGICOUNT 0x545D /* read serial port inline interrupt counts */
/*
* Convert back and forth between INTEVT and IRQ values.
*/
+#ifdef CONFIG_CPU_HAS_INTEVT
#define evt2irq(evt) (((evt) >> 5) - 16)
#define irq2evt(irq) (((irq) + 16) << 5)
+#else
+#define evt2irq(evt) (evt)
+#define irq2evt(irq) (irq)
+#endif
/*
* Simple Mask Register Support
*/
#include <linux/irqflags.h>
+#include <linux/compiler.h>
#include <asm/types.h>
/*
#define __NR_fcntl64 221
/* 223 is unused */
#define __NR_gettid 224
+#define __NR_readahead 225
#define __NR_setxattr 226
#define __NR_lsetxattr 227
#define __NR_fsetxattr 228
unsigned long r_address; /* relocation addr */
unsigned int r_index:24; /* segment index or symbol index */
unsigned int r_extern:1; /* if F, r_index==SEG#; if T, SYM idx */
- int r_pad:2; /* <unused> */
+ unsigned int r_pad:2; /* <unused> */
enum reloc_type r_type:5; /* type of relocation to perform */
long r_addend; /* addend for relocation value */
};
#ifdef CONFIG_PCI
#include <asm-generic/dma-mapping.h>
#else
-
-static inline void *dma_alloc_coherent(struct device *dev, size_t size,
- dma_addr_t *dma_handle, gfp_t flag)
-{
- BUG();
- return NULL;
-}
-
-static inline void dma_free_coherent(struct device *dev, size_t size,
- void *vaddr, dma_addr_t dma_handle)
-{
- BUG();
-}
-
+#include <asm-generic/dma-mapping-broken.h>
#endif /* PCI */
#endif /* _ASM_SPARC_DMA_MAPPING_H */
#define MSTK_DOW_MASK 0x07
#define MSTK_DOM_MASK 0x3f
#define MSTK_MONTH_MASK 0x1f
-#define MSTK_YEAR_MASK 0xff
+#define MSTK_YEAR_MASK 0xffU
/* Binary coded decimal conversion macros. */
#define MSTK_REGVAL_TO_DECIMAL(x) (((x) & 0x0F) + 0x0A * ((x) >> 0x04))
#define __NR_set_robust_list 300
#define __NR_get_robust_list 301
#define __NR_migrate_pages 302
+#define __NR_mbind 303
+#define __NR_get_mempolicy 304
+#define __NR_set_mempolicy 305
+#define __NR_kexec_load 306
+#define __NR_move_pages 307
+#define __NR_getcpu 308
+#define __NR_epoll_pwait 309
-#define NR_SYSCALLS 303
+#define NR_SYSCALLS 310
#ifdef __KERNEL__
-/* WARNING: You MAY NOT add syscall numbers larger than 302, since
- * all of the syscall tables in the Sparc kernel are
- * sized to have 302 entries (starting at zero). Therefore
- * find a free slot in the 0-302 range.
- */
-
#define __ARCH_WANT_IPC_PARSE_VERSION
#define __ARCH_WANT_OLD_READDIR
#define __ARCH_WANT_STAT64
#define __ARCH_WANT_SYS_GETPGRP
#define __ARCH_WANT_SYS_LLSEEK
#define __ARCH_WANT_SYS_NICE
-#define __ARCH_WANT_SYS_OLD_GETRLIMIT
#define __ARCH_WANT_SYS_OLDUMOUNT
#define __ARCH_WANT_SYS_SIGPENDING
#define __ARCH_WANT_SYS_SIGPROCMASK
unsigned int r_address; /* relocation addr */
unsigned int r_index:24; /* segment index or symbol index */
unsigned int r_extern:1; /* if F, r_index==SEG#; if T, SYM idx */
- int r_pad:2; /* <unused> */
+ unsigned int r_pad:2; /* <unused> */
enum reloc_type r_type:5; /* type of relocation to perform */
int r_addend; /* addend for relocation value */
};
#define MSTK_DOW_MASK 0x07
#define MSTK_DOM_MASK 0x3f
#define MSTK_MONTH_MASK 0x1f
-#define MSTK_YEAR_MASK 0xff
+#define MSTK_YEAR_MASK 0xffU
/* Binary coded decimal conversion macros. */
#define MSTK_REGVAL_TO_DECIMAL(x) (((x) & 0x0F) + 0x0A * ((x) >> 0x04))
be,a,pt %xcc, OK_LABEL; \
mov REG4, REG1;
+#ifndef CONFIG_DEBUG_PAGEALLOC
/* This version uses a trick, the TAG is already (VADDR >> 22) so
* we can make use of that for the index computation.
*/
cmp REG3, TAG; \
be,a,pt %xcc, OK_LABEL; \
mov REG4, REG1;
+#endif
#endif /* !(_SPARC64_TSB_H) */
#define __NR_set_robust_list 300
#define __NR_get_robust_list 301
#define __NR_migrate_pages 302
+#define __NR_mbind 303
+#define __NR_get_mempolicy 304
+#define __NR_set_mempolicy 305
+#define __NR_kexec_load 306
+#define __NR_move_pages 307
+#define __NR_getcpu 308
+#define __NR_epoll_pwait 309
-#define NR_SYSCALLS 303
+#define NR_SYSCALLS 310
#ifdef __KERNEL__
-
-/* WARNING: You MAY NOT add syscall numbers larger than 302, since
- * all of the syscall tables in the Sparc kernel are
- * sized to have 302 entries (starting at zero). Therefore
- * find a free slot in the 0-302 range.
- */
-
/* sysconf options, for SunOS compatibility */
#define _SC_ARG_MAX 1
#define _SC_CHILD_MAX 2
#define __ARCH_WANT_SYS_GETPGRP
#define __ARCH_WANT_SYS_LLSEEK
#define __ARCH_WANT_SYS_NICE
-#define __ARCH_WANT_SYS_OLD_GETRLIMIT
#define __ARCH_WANT_SYS_OLDUMOUNT
#define __ARCH_WANT_SYS_SIGPENDING
#define __ARCH_WANT_SYS_SIGPROCMASK
PROVIDE (_unprotected_end = .);
. = ALIGN(4096);
+ .note : { *(.note.*) }
__start___ex_table = .;
__ex_table : { *(__ex_table) }
__stop___ex_table = .;
#ifndef __UM_DELAY_H
#define __UM_DELAY_H
-#include "asm/arch/delay.h"
-#include "asm/archparam.h"
-
#define MILLION 1000000
+/* Undefined on purpose */
+extern void __bad_udelay(void);
+
+extern void __udelay(unsigned long usecs);
+extern void __delay(unsigned long loops);
+
+#define udelay(n) ((__builtin_constant_p(n) && (n) > 20000) ? \
+ __bad_udelay() : __udelay(n))
+
+/* It appears that ndelay is not used at all for UML, and has never been
+ * implemented. */
+extern void __unimplemented_ndelay(void);
+#define ndelay(n) __unimplemented_ndelay()
+
#endif
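/*
 * Sketch of the constant-folding guard used by the udelay() macro above
 * (illustrative names only; __bad_example_delay is invented). A call with a
 * compile-time constant above the limit resolves to an extern function that
 * is deliberately never defined, so the build fails at link time instead of
 * silently busy-waiting for far too long.
 */
extern void __bad_example_delay(void);	/* intentionally left undefined */
extern void __example_delay(unsigned long usecs);

#define example_udelay(n)					\
	((__builtin_constant_p(n) && (n) > 20000) ?		\
		__bad_example_delay() : __example_delay(n))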
((unsigned long) __va(pmd_val(pmd) & PAGE_MASK))
/*
- * Bits 0 through 3 are taken
+ * Bits 0 through 4 are taken
*/
-#define PTE_FILE_MAX_BITS 28
+#define PTE_FILE_MAX_BITS 27
-#define pte_to_pgoff(pte) (pte_val(pte) >> 4)
+#define pte_to_pgoff(pte) (pte_val(pte) >> 5)
-#define pgoff_to_pte(off) ((pte_t) { ((off) << 4) + _PAGE_FILE })
+#define pgoff_to_pte(off) ((pte_t) { ((off) << 5) + _PAGE_FILE })
#endif
#define ARCH_APICTIMER_STOPS_ON_C3 1
extern unsigned boot_cpu_id;
+extern int local_apic_timer_c2_ok;
#endif /* __ASM_APIC_H */
#define IRQ_MOVE_CLEANUP_VECTOR FIRST_EXTERNAL_VECTOR
/*
- * Vectors 0x20-0x2f are used for ISA interrupts.
+ * Vectors 0x30-0x3f are used for ISA interrupts.
*/
#define IRQ0_VECTOR FIRST_EXTERNAL_VECTOR + 0x10
#define IRQ1_VECTOR IRQ0_VECTOR + 1
extern atomic_t nmi_active;
extern unsigned int nmi_watchdog;
-#define NMI_DEFAULT 0
+#define NMI_DEFAULT -1
#define NMI_NONE 0
#define NMI_IO_APIC 1
#define NMI_LOCAL_APIC 2
extern int iommu_detected;
#ifdef CONFIG_IOMMU
extern void gart_iommu_init(void);
-extern void gart_parse_options(char *);
+extern void __init gart_parse_options(char *);
extern void iommu_hole_init(void);
extern int fallback_aper_order;
extern int fallback_aper_force;
#include <linux/threads.h>
#include <linux/cpumask.h>
#include <linux/bitops.h>
+#include <linux/init.h>
extern int disable_apic;
#include <asm/fixmap.h>
extern void __cpu_die(unsigned int cpu);
extern void prefill_possible_map(void);
extern unsigned num_processors;
-extern unsigned disabled_cpus;
+extern unsigned __cpuinitdata disabled_cpus;
#define NO_PROC_ID 0xFF /* No processor magic marker */
static inline int __copy_from_user_nocache(void *dst, const void __user *src, unsigned size)
{
might_sleep();
- return __copy_user_nocache(dst, (__force void *)src, size, 1);
+ return __copy_user_nocache(dst, src, size, 1);
}
static inline int __copy_from_user_inatomic_nocache(void *dst, const void __user *src, unsigned size)
{
- return __copy_user_nocache(dst, (__force void *)src, size, 0);
+ return __copy_user_nocache(dst, src, size, 0);
}
#endif /* __X86_64_UACCESS_H */
ATA_MAX_DEVICES = 2, /* per bus/port */
ATA_MAX_PRD = 256, /* we could make these 256/256 */
ATA_SECT_SIZE = 512,
+ ATA_MAX_SECTORS_128 = 128,
ATA_MAX_SECTORS = 256,
ATA_MAX_SECTORS_LBA48 = 65535,/* TODO: 65536? */
void clear_bdi_congested(struct backing_dev_info *bdi, int rw);
void set_bdi_congested(struct backing_dev_info *bdi, int rw);
long congestion_wait(int rw, long timeout);
+long congestion_wait_interruptible(int rw, long timeout);
void congestion_end(int rw);
#define bdi_cap_writeback_dirty(bdi) \
#endif /* CONFIG_HAVE_ARCH_ALLOC_REMAP */
extern unsigned long __meminitdata nr_kernel_pages;
-extern unsigned long nr_all_pages;
+extern unsigned long __meminitdata nr_all_pages;
extern void *alloc_large_system_hash(const char *tablename,
unsigned long bucketsize,
# define __acquire(x) __context__(x,1)
# define __release(x) __context__(x,-1)
# define __cond_lock(x,c) ((c) ? ({ __acquire(x); 1; }) : 0)
-extern void __chk_user_ptr(void __user *);
-extern void __chk_io_ptr(void __iomem *);
+extern void __chk_user_ptr(const void __user *);
+extern void __chk_io_ptr(const void __iomem *);
#else
# define __user
# define __kernel
#endif /* CONFIG_HOTPLUG_CPU */
#ifdef CONFIG_SUSPEND_SMP
+extern int suspend_cpu_hotplug;
+
extern int disable_nonboot_cpus(void);
extern void enable_nonboot_cpus(void);
#else
+#define suspend_cpu_hotplug 0
+
static inline int disable_nonboot_cpus(void) { return 0; }
static inline void enable_nonboot_cpus(void) {}
#endif
struct module * owner;
const char * mod_name; /* used for built-in modules */
+ struct module_kobject * mkobj;
int (*probe) (struct device * dev);
int (*remove) (struct device * dev);
struct bin_attribute *attr);
extern void device_remove_bin_file(struct device *dev,
struct bin_attribute *attr);
+extern int device_schedule_callback(struct device *dev,
+ void (*func)(struct device *));
/* device resource management */
typedef void (*dr_release_t)(struct device *dev, void *res);
/*
* On x86-64 make the 64bit structure have the same alignment as the
* 32bit structure. This makes 32bit emulation easier.
+ *
+ * UML/x86_64 needs the same packing as x86_64 - UML + UML_X86 +
+ * 64_BIT adds up to UML/x86_64.
*/
#ifdef __x86_64__
#define EPOLL_PACKED __attribute__((packed))
#else
+#if defined(CONFIG_UML) && defined(CONFIG_UML_X86) && defined(CONFIG_64BIT)
+#define EPOLL_PACKED __attribute__((packed))
+#else
#define EPOLL_PACKED
#endif
+#endif
struct epoll_event {
__u32 events;
struct clock_event_device;
extern void clock_was_set(void);
+extern void hres_timers_resume(void);
extern void hrtimer_interrupt(struct clock_event_device *dev);
/*
*/
static inline void clock_was_set(void) { }
+static inline void hres_timers_resume(void) { }
+
/*
* In non high resolution mode the time reference is taken from
* the base softirq time variable.
u8 init_speed; /* transfer rate set at boot */
u8 pio_speed; /* unused by core, used by some drivers for fallback from DMA */
u8 current_speed; /* current transfer rate set */
+ u8 desired_speed; /* desired transfer rate set */
u8 dn; /* now wide spread use */
u8 wcache; /* status of write cache */
u8 acoustic; /* acoustic management */
int (*expiry)(ide_drive_t *);
/* ide_system_bus_speed */
int pio_clock;
+ int req_gen;
+ int req_gen_timer;
unsigned char cmd_buf[4];
} ide_hwgroup_t;
/*
* Managed iomap interface
*/
+#ifdef CONFIG_HAS_IOPORT
void __iomem * devm_ioport_map(struct device *dev, unsigned long port,
unsigned int nr);
void devm_ioport_unmap(struct device *dev, void __iomem *addr);
+#else
+static inline void __iomem *devm_ioport_map(struct device *dev,
+ unsigned long port,
+ unsigned int nr)
+{
+ return NULL;
+}
+
+static inline void devm_ioport_unmap(struct device *dev, void __iomem *addr)
+{
+}
+#endif
void __iomem * devm_ioremap(struct device *dev, unsigned long offset,
unsigned long size);
#ifdef CONFIG_SYSVIPC
#define INIT_IPC_NS(ns) .ns = &init_ipc_ns,
+extern int copy_ipcs(unsigned long flags, struct task_struct *tsk);
#else
#define INIT_IPC_NS(ns)
+static inline int copy_ipcs(unsigned long flags, struct task_struct *tsk)
+{ return 0; }
#endif
#ifdef CONFIG_IPC_NS
extern void free_ipc_ns(struct kref *kref);
-extern int copy_ipcs(unsigned long flags, struct task_struct *tsk);
extern int unshare_ipcs(unsigned long flags, struct ipc_namespace **ns);
-#else
-static inline int copy_ipcs(unsigned long flags, struct task_struct *tsk)
-{
- return 0;
-}
#endif
static inline struct ipc_namespace *get_ipc_ns(struct ipc_namespace *ns)
#endif
#endif
__s32 proxy_ndp;
+ __s32 accept_source_route;
void *sysctl;
};
DEVCONF_RTR_PROBE_INTERVAL,
DEVCONF_ACCEPT_RA_RT_INFO_MAX_PLEN,
DEVCONF_PROXY_NDP,
+ __DEVCONF_OPTIMISTIC_DAD,
+ DEVCONF_ACCEPT_SOURCE_ROUTE,
DEVCONF_MAX
};
extern void (*kbd_ledfunc)(unsigned int led);
-extern void set_console(int nr);
+extern int set_console(int nr);
extern void schedule_console_callback(void);
static inline void set_leds(void)
return dev & 0x3ffff;
}
-bool is_lanana_major(unsigned int major);
-
#else /* __KERNEL__ */
/*
} ktime_t;
#define KTIME_MAX ((s64)~((u64)1 << 63))
-#define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC)
+#if (BITS_PER_LONG == 64)
+# define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC)
+#else
+# define KTIME_SEC_MAX LONG_MAX
+#endif
/*
* ktime_t definitions when using the 64-bit scalar representation:
ATA_HORKAGE_DIAGNOSTIC = (1 << 0), /* Failed boot diag */
ATA_HORKAGE_NODMA = (1 << 1), /* DMA problems */
ATA_HORKAGE_NONCQ = (1 << 2), /* Don't use NCQ */
+ ATA_HORKAGE_MAX_SEC_128 = (1 << 3), /* Limit max sects to 128 */
+ ATA_HORKAGE_DMA_RW_ONLY = (1 << 4), /* ATAPI DMA for RW only */
};
enum hsm_task_states {
HSM_ST_IDLE, /* no command on going */
+ HSM_ST_FIRST, /* (waiting the device to)
+ write CDB or first data block */
HSM_ST, /* (waiting the device to) transfer data */
HSM_ST_LAST, /* (waiting the device to) complete command */
HSM_ST_ERR, /* error */
- HSM_ST_FIRST, /* (waiting the device to)
- write CDB or first data block */
};
enum ata_completion_errors {
# define INIT_LOCKDEP .lockdep_recursion = 0,
-#define lockdep_depth(tsk) ((tsk)->lockdep_depth)
+#define lockdep_depth(tsk) (debug_locks ? (tsk)->lockdep_depth : 0)
#else /* !LOCKDEP */
struct {
__u8 type : 5; /* {0: unused, 5h:MSI, 11h:MSI-X} */
__u8 maskbit : 1; /* mask-pending bit supported ? */
- __u8 unused : 1;
+ __u8 masked : 1;
__u8 is_64 : 1; /* Address size: 0=32bit 1=64bit */
__u8 pos; /* Location of the msi capability */
__u16 entry_nr; /* specific enabled entry */
void __iomem *mask_base;
struct pci_dev *dev;
-#ifdef CONFIG_PM
- /* PM save area for MSIX address/data */
- struct msi_msg msg_save;
-#endif
+ /* Last set MSI message */
+ struct msi_msg msg;
};
/*
/*
* linux/fs/nfs/write.c
*/
+extern int nfs_congestion_kb;
extern int nfs_writepage(struct page *page, struct writeback_control *wbc);
extern int nfs_writepages(struct address_space *, struct writeback_control *);
extern int nfs_flush_incompatible(struct file *file, struct page *page);
struct rpc_clnt * client_acl; /* ACL RPC client handle */
struct nfs_iostats * io_stats; /* I/O statistics */
struct backing_dev_info backing_dev_info;
+ atomic_t writeback; /* number of writeback pages */
int flags; /* various flags */
unsigned int caps; /* server capabilities */
unsigned int rsize; /* read size */
#define PG_NEED_COMMIT 1
#define PG_NEED_RESCHED 2
#define PG_NEED_FLUSH 3
-#define PG_FLUSHING 4
struct nfs_inode;
struct nfs_page {
};
#define NFS_WBACK_BUSY(req) (test_bit(PG_BUSY,&(req)->wb_flags))
-#define NFS_NEED_COMMIT(req) (test_bit(PG_NEED_COMMIT,&(req)->wb_flags))
-#define NFS_NEED_RESCHED(req) (test_bit(PG_NEED_RESCHED,&(req)->wb_flags))
extern struct nfs_page *nfs_create_request(struct nfs_open_context *ctx,
struct inode *inode,
req->wb_list_head = NULL;
}
-static inline int
-nfs_defer_commit(struct nfs_page *req)
-{
- return !test_and_set_bit(PG_NEED_COMMIT, &req->wb_flags);
-}
-
-static inline void
-nfs_clear_commit(struct nfs_page *req)
-{
- smp_mb__before_clear_bit();
- clear_bit(PG_NEED_COMMIT, &req->wb_flags);
- smp_mb__after_clear_bit();
-}
-
-static inline int
-nfs_defer_reschedule(struct nfs_page *req)
-{
- return !test_and_set_bit(PG_NEED_RESCHED, &req->wb_flags);
-}
-
-static inline void
-nfs_clear_reschedule(struct nfs_page *req)
-{
- smp_mb__before_clear_bit();
- clear_bit(PG_NEED_RESCHED, &req->wb_flags);
- smp_mb__after_clear_bit();
-}
-
static inline struct nfs_page *
nfs_list_entry(struct list_head *head)
{
hlist_add_head(&new_cap->next, &pci_dev->saved_cap_space);
}
-static inline void pci_remove_saved_cap(struct pci_cap_saved_state *cap)
-{
- hlist_del(&cap->next);
-}
-
/*
* For PCI devices, the region numbers are assigned this way:
*
#define PCI_MSIX_FLAGS 2
#define PCI_MSIX_FLAGS_QSIZE 0x7FF
#define PCI_MSIX_FLAGS_ENABLE (1 << 15)
+#define PCI_MSIX_FLAGS_MASKALL (1 << 14)
#define PCI_MSIX_FLAGS_BIRMASK (7 << 0)
#define PCI_MSIX_FLAGS_BITMASK (1 << 0)
loff_t *, size_t, unsigned int,
splice_actor *);
+extern ssize_t __splice_from_pipe(struct pipe_inode_info *, struct file *,
+ loff_t *, size_t, unsigned int,
+ splice_actor *);
+
#endif
#endif
/**
- * #PLIST_HEAD_INIT - static struct plist_head initializer
- *
+ * PLIST_HEAD_INIT - static struct plist_head initializer
* @head: struct plist_head variable name
+ * @_lock: lock to initialize for this list
*/
#define PLIST_HEAD_INIT(head, _lock) \
{ \
}
/**
- * #PLIST_NODE_INIT - static struct plist_node initializer
- *
+ * PLIST_NODE_INIT - static struct plist_node initializer
* @node: struct plist_node variable name
* @__prio: initial node priority
*/
/**
* plist_head_init - dynamic struct plist_head initializer
- *
* @head: &struct plist_head pointer
+ * @lock: list spinlock, remembered for debugging
*/
static inline void
plist_head_init(struct plist_head *head, spinlock_t *lock)
/**
* plist_node_init - Dynamic struct plist_node initializer
- *
* @node: &struct plist_node pointer
* @prio: initial node priority
*/
/**
* plist_for_each - iterate over the plist
- *
- * @pos1: the type * to use as a loop counter.
- * @head: the head for your list.
+ * @pos: the type * to use as a loop counter
+ * @head: the head for your list
*/
#define plist_for_each(pos, head) \
list_for_each_entry(pos, &(head)->node_list, plist.node_list)
/**
- * plist_for_each_entry_safe - iterate over a plist of given type safe
- * against removal of list entry
+ * plist_for_each_safe - iterate safely over a plist of given type
+ * @pos: the type * to use as a loop counter
+ * @n: another type * to use as temporary storage
+ * @head: the head for your list
*
- * @pos1: the type * to use as a loop counter.
- * @n1: another type * to use as temporary storage
- * @head: the head for your list.
+ * Iterate over a plist of given type, safe against removal of list entry.
*/
#define plist_for_each_safe(pos, n, head) \
list_for_each_entry_safe(pos, n, &(head)->node_list, plist.node_list)
/**
* plist_for_each_entry - iterate over list of given type
- *
- * @pos: the type * to use as a loop counter.
- * @head: the head for your list.
- * @member: the name of the list_struct within the struct.
+ * @pos: the type * to use as a loop counter
+ * @head: the head for your list
+ * @mem: the name of the list_struct within the struct
*/
#define plist_for_each_entry(pos, head, mem) \
list_for_each_entry(pos, &(head)->node_list, mem.plist.node_list)
/**
- * plist_for_each_entry_safe - iterate over list of given type safe against
- * removal of list entry
- *
- * @pos: the type * to use as a loop counter.
+ * plist_for_each_entry_safe - iterate safely over list of given type
+ * @pos: the type * to use as a loop counter
* @n: another type * to use as temporary storage
- * @head: the head for your list.
- * @m: the name of the list_struct within the struct.
+ * @head: the head for your list
+ * @m: the name of the list_struct within the struct
+ *
+ * Iterate over list of given type, safe against removal of list entry.
*/
#define plist_for_each_entry_safe(pos, n, head, m) \
list_for_each_entry_safe(pos, n, &(head)->node_list, m.plist.node_list)
/**
* plist_head_empty - return !0 if a plist_head is empty
- *
* @head: &struct plist_head pointer
*/
static inline int plist_head_empty(const struct plist_head *head)
/**
* plist_node_empty - return !0 if plist_node is not on a list
- *
* @node: &struct plist_node pointer
*/
static inline int plist_node_empty(const struct plist_node *node)
/**
* plist_first_entry - get the struct for the first entry
- *
- * @ptr: the &struct plist_head pointer.
- * @type: the type of the struct this is embedded in.
- * @member: the name of the list_struct within the struct.
+ * @head: the &struct plist_head pointer
+ * @type: the type of the struct this is embedded in
+ * @member: the name of the list_struct within the struct
*/
#ifdef CONFIG_DEBUG_PI_LIST
# define plist_first_entry(head, type, member) \
/**
* plist_first - return the first node (and thus, highest priority)
- *
* @head: the &struct plist_head pointer
*
* Assumes the plist is _not_ empty.
* for reporting to userspace and storing
* in superblock.
*/
+ struct work_struct del_work; /* used for delayed sysfs removal */
};
struct mddev_s
void (*d_instantiate) (struct dentry *dentry, struct inode *inode);
- int (*getprocattr)(struct task_struct *p, char *name, void *value, size_t size);
+ int (*getprocattr)(struct task_struct *p, char *name, char **value);
int (*setprocattr)(struct task_struct *p, char *name, void *value, size_t size);
int (*secid_to_secctx)(u32 secid, char **secdata, u32 *seclen);
void (*release_secctx)(char *secdata, u32 seclen);
security_ops->d_instantiate (dentry, inode);
}
-static inline int security_getprocattr(struct task_struct *p, char *name, void *value, size_t size)
+static inline int security_getprocattr(struct task_struct *p, char *name, char **value)
{
- return security_ops->getprocattr(p, name, value, size);
+ return security_ops->getprocattr(p, name, value);
}
static inline int security_setprocattr(struct task_struct *p, char *name, void *value, size_t size)
static inline void security_d_instantiate (struct dentry *dentry, struct inode *inode)
{ }
-static inline int security_getprocattr(struct task_struct *p, char *name, void *value, size_t size)
+static inline int security_getprocattr(struct task_struct *p, char *name, char **value)
{
return -EINVAL;
}
* @sk: Socket we are owned by
* @tstamp: Time we arrived
* @dev: Device we arrived on/are leaving by
- * @input_dev: Device we arrived on
+ * @iif: ifindex of device we arrived on
* @h: Transport layer header
* @nh: Network layer header
* @mac: Link layer header
struct sock *sk;
struct skb_timeval tstamp;
struct net_device *dev;
- struct net_device *input_dev;
+ int iif;
+ /* 4 byte hole on 64 bit */
union {
struct tcphdr *th;
return __alloc_skb(size, priority, 1, -1);
}
-extern struct sk_buff *alloc_skb_from_cache(struct kmem_cache *cp,
- unsigned int size,
- gfp_t priority);
extern void kfree_skbmem(struct sk_buff *skb);
extern struct sk_buff *skb_clone(struct sk_buff *skb,
gfp_t priority);
list->qlen = 0;
}
+static inline void skb_queue_head_init_class(struct sk_buff_head *list,
+ struct lock_class_key *class)
+{
+ skb_queue_head_init(list);
+ lockdep_set_class(&list->lock, class);
+}
+
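/*
 * Hypothetical usage sketch (the key and queue names are invented): a driver
 * can give its private queue a distinct lockdep class so that nesting two
 * skb queue locks in generic code does not trigger false "possible recursive
 * locking" reports.
 */
static struct lock_class_key example_queue_lock_key;

static inline void example_init_queue(struct sk_buff_head *q)
{
	skb_queue_head_init_class(q, &example_queue_lock_key);
}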
/*
* Insert an sk_buff at the start of a list.
*
spinlock_t lock;
struct list_head queue;
u8 busy;
- u8 shutdown;
u8 use_dma;
struct spi_master *master;
NET_IPV6_RTR_PROBE_INTERVAL=21,
NET_IPV6_ACCEPT_RA_RT_INFO_MAX_PLEN=22,
NET_IPV6_PROXY_NDP=23,
+ NET_IPV6_ACCEPT_SOURCE_ROUTE=25,
__NET_IPV6_MAX
};
#define _SYSFS_H_
#include <linux/compiler.h>
+#include <linux/errno.h>
#include <linux/list.h>
#include <asm/atomic.h>
#ifdef CONFIG_SYSFS
+extern int sysfs_schedule_callback(struct kobject *kobj,
+ void (*func)(void *), void *data);
+
extern int __must_check
sysfs_create_dir(struct kobject *, struct dentry *);
#else /* CONFIG_SYSFS */
+static inline int sysfs_schedule_callback(struct kobject *kobj,
+ void (*func)(void *), void *data)
+{
+ return -ENOSYS;
+}
+
static inline int sysfs_create_dir(struct kobject * k, struct dentry *shadow)
{
return 0;
*/
-#define TASKSTATS_VERSION 3
+#define TASKSTATS_VERSION 4
#define TS_COMM_LEN 32 /* should be >= TASK_COMM_LEN
* in linux/sched.h */
/* Delay waiting for cpu, while runnable
* count, delay_total NOT updated atomically
*/
- __u64 cpu_count;
+ __u64 cpu_count __attribute__((aligned(8)));
__u64 cpu_delay_total;
/* Following four fields atomically updated using task->delays->lock */
/* Basic Accounting Fields start */
char ac_comm[TS_COMM_LEN]; /* Command name */
- __u8 ac_sched; /* Scheduling discipline */
+ __u8 ac_sched __attribute__((aligned(8)));
+ /* Scheduling discipline */
__u8 ac_pad[3];
- __u32 ac_uid; /* User ID */
+ __u32 ac_uid __attribute__((aligned(8)));
+ /* User ID */
__u32 ac_gid; /* Group ID */
__u32 ac_pid; /* Process ID */
__u32 ac_ppid; /* Parent process ID */
__u32 ac_btime; /* Begin time [sec since 1970] */
- __u64 ac_etime; /* Elapsed time [usec] */
+ __u64 ac_etime __attribute__((aligned(8)));
+ /* Elapsed time [usec] */
__u64 ac_utime; /* User CPU time [usec] */
__u64 ac_stime; /* SYstem CPU time [usec] */
__u64 ac_minflt; /* Minor Page Fault Count */
__fs32 ui_blksize; /* 12: Inode blocksize. */
__fs64 ui_size; /* 16: File byte count. */
__fs64 ui_blocks; /* 24: Bytes actually held. */
- struct ufs_timeval ui_atime; /* 32: Last access time. */
- struct ufs_timeval ui_mtime; /* 40: Last modified time. */
- struct ufs_timeval ui_ctime; /* 48: Last inode change time. */
- struct ufs_timeval ui_birthtime; /* 56: Inode creation time. */
+ __fs64 ui_atime; /* 32: Last access time. */
+ __fs64 ui_mtime; /* 40: Last modified time. */
+ __fs64 ui_ctime; /* 48: Last inode change time. */
+ __fs64 ui_birthtime; /* 56: Inode creation time. */
__fs32 ui_mtimensec; /* 64: Last modified time. */
__fs32 ui_atimensec; /* 68: Last access time. */
__fs32 ui_ctimensec; /* 72: Last inode change time. */
static inline int copy_utsname(int flags, struct task_struct *tsk)
{
+ if (flags & CLONE_NEWUTS)
+ return -EINVAL;
return 0;
}
static inline void put_uts_ns(struct uts_namespace *ns)
#define CON_BUF_SIZE (CONFIG_BASE_SMALL ? 256 : PAGE_SIZE)
extern char con_buf[CON_BUF_SIZE];
extern struct semaphore con_buf_sem;
+extern char vt_dont_switch;
struct vt_spawn_console {
spinlock_t lock;
/*
* This file define a set of standard wireless extensions
*
- * Version : 21 14.3.06
+ * Version : 22 16.3.07
*
* Authors : Jean Tourrilhes - HPL - <jt@hpl.hp.com>
- * Copyright (c) 1997-2006 Jean Tourrilhes, All Rights Reserved.
+ * Copyright (c) 1997-2007 Jean Tourrilhes, All Rights Reserved.
*/
#ifndef _LINUX_WIRELESS_H
* (there is some stuff that will be added in the future...)
* I just plan to increment with each new version.
*/
-#define WIRELESS_EXT 21
+#define WIRELESS_EXT 22
/*
* Changes :
* - Add IW_RETRY_SHORT/IW_RETRY_LONG retry modifiers
* - Power/Retry relative values no longer * 100000
* - Add explicit flag to tell stats are in 802.11k RCPI : IW_QUAL_RCPI
+ *
+ * V21 to V22
+ * ----------
+ * - Prevent leaking of kernel space in stream on 64 bits.
*/
/**************************** CONSTANTS ****************************/
#define IW_EV_POINT_LEN (IW_EV_LCP_LEN + sizeof(struct iw_point) - \
IW_EV_POINT_OFF)
+/* Size of the Event prefix when packed in stream */
+#define IW_EV_LCP_PK_LEN (4)
+/* Size of the various events when packed in stream */
+#define IW_EV_CHAR_PK_LEN (IW_EV_LCP_PK_LEN + IFNAMSIZ)
+#define IW_EV_UINT_PK_LEN (IW_EV_LCP_PK_LEN + sizeof(__u32))
+#define IW_EV_FREQ_PK_LEN (IW_EV_LCP_PK_LEN + sizeof(struct iw_freq))
+#define IW_EV_PARAM_PK_LEN (IW_EV_LCP_PK_LEN + sizeof(struct iw_param))
+#define IW_EV_ADDR_PK_LEN (IW_EV_LCP_PK_LEN + sizeof(struct sockaddr))
+#define IW_EV_QUAL_PK_LEN (IW_EV_LCP_PK_LEN + sizeof(struct iw_quality))
+#define IW_EV_POINT_PK_LEN (IW_EV_LCP_PK_LEN + 4)
+
#endif /* _LINUX_WIRELESS_H */
#define SAA7146_HPS_SYNC_PORT_B 0x01
/* some memory sizes */
-#define SAA7146_CLIPPING_MEM (14*PAGE_SIZE)
+/* max. 16 clipping rectangles */
+#define SAA7146_CLIPPING_MEM (16 * 4 * sizeof(u32))
/* some defines for the various clipping-modes */
#define SAA7146_CLIPPING_RECT 0x4
int family;
struct list_head list;
int rule_size;
+ int addr_size;
int (*action)(struct fib_rule *,
struct flowi *, int,
__u16 fn_bit; /* bit key */
__u16 fn_flags;
__u32 fn_sernum;
+ struct rt6_info *rr_ptr;
};
#ifndef CONFIG_IPV6_SUBTREES
/*
* This file define the new driver API for Wireless Extensions
*
- * Version : 7 18.3.05
+ * Version : 8 16.3.07
*
* Authors : Jean Tourrilhes - HPL - <jt@hpl.hp.com>
- * Copyright (c) 2001-2006 Jean Tourrilhes, All Rights Reserved.
+ * Copyright (c) 2001-2007 Jean Tourrilhes, All Rights Reserved.
*/
#ifndef _IW_HANDLER_H
* will be needed...
* I just plan to increment with each new version.
*/
-#define IW_HANDLER_VERSION 7
+#define IW_HANDLER_VERSION 8
/*
* Changes :
* - Remove (struct iw_point *)->pointer from events and streams
* - Remove spy_offset from struct iw_handler_def
* - Add "check" version of event macros for ieee802.11 stack
+ *
+ * V7 to V8
+ * ----------
+ * - Prevent leaking of kernel space in stream on 64 bits.
*/
/**************************** CONSTANTS ****************************/
/* Check if it's possible */
if(likely((stream + event_len) < ends)) {
iwe->len = event_len;
- memcpy(stream, (char *) iwe, event_len);
+ /* Beware of alignment issues on 64 bits */
+ memcpy(stream, (char *) iwe, IW_EV_LCP_PK_LEN);
+ memcpy(stream + IW_EV_LCP_LEN,
+ ((char *) iwe) + IW_EV_LCP_LEN,
+ event_len - IW_EV_LCP_LEN);
stream += event_len;
}
return stream;
/* Check if it's possible */
if(likely((stream + event_len) < ends)) {
iwe->len = event_len;
- memcpy(stream, (char *) iwe, IW_EV_LCP_LEN);
+ memcpy(stream, (char *) iwe, IW_EV_LCP_PK_LEN);
memcpy(stream + IW_EV_LCP_LEN,
((char *) iwe) + IW_EV_LCP_LEN + IW_EV_POINT_OFF,
- IW_EV_POINT_LEN - IW_EV_LCP_LEN);
+ IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN);
memcpy(stream + IW_EV_POINT_LEN, extra, iwe->u.data.length);
stream += event_len;
}
/* Check if it's possible, set error if not */
if(likely((stream + event_len) < ends)) {
iwe->len = event_len;
- memcpy(stream, (char *) iwe, event_len);
+ /* Beware of alignment issues on 64 bits */
+ memcpy(stream, (char *) iwe, IW_EV_LCP_PK_LEN);
+ memcpy(stream + IW_EV_LCP_LEN,
+ ((char *) iwe) + IW_EV_LCP_LEN,
+ event_len - IW_EV_LCP_LEN);
stream += event_len;
} else
*perr = -E2BIG;
/* Check if it's possible */
if(likely((stream + event_len) < ends)) {
iwe->len = event_len;
- memcpy(stream, (char *) iwe, IW_EV_LCP_LEN);
+ memcpy(stream, (char *) iwe, IW_EV_LCP_PK_LEN);
memcpy(stream + IW_EV_LCP_LEN,
((char *) iwe) + IW_EV_LCP_LEN + IW_EV_POINT_OFF,
- IW_EV_POINT_LEN - IW_EV_LCP_LEN);
+ IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN);
memcpy(stream + IW_EV_POINT_LEN, extra, iwe->u.data.length);
stream += event_len;
} else
struct net_device *dev;
struct neigh_parms *next;
int (*neigh_setup)(struct neighbour *);
- void (*neigh_destructor)(struct neighbour *);
+ void (*neigh_cleanup)(struct neighbour *);
struct neigh_table *tbl;
void *sysctl_table;
static inline int
tcf_match_indev(struct sk_buff *skb, char *indev)
{
+ struct net_device *dev;
+
if (indev[0]) {
- if (!skb->input_dev)
+ if (!skb->iif)
return 0;
- if (strcmp(indev, skb->input_dev->name))
+ dev = __dev_get_by_index(skb->iif);
+ if (!dev || strcmp(indev, dev->name))
return 0;
}
void sctp_transport_raise_cwnd(struct sctp_transport *, __u32, __u32);
void sctp_transport_lower_cwnd(struct sctp_transport *, sctp_lower_cwnd_t);
unsigned long sctp_transport_timeout(struct sctp_transport *);
+void sctp_transport_reset(struct sctp_transport *);
/* This is the structure we use to queue packets as they come into
/* Prototypes. */
struct sctp_ulpq *sctp_ulpq_init(struct sctp_ulpq *,
struct sctp_association *);
+void sctp_ulpq_flush(struct sctp_ulpq *ulpq);
void sctp_ulpq_free(struct sctp_ulpq *);
/* Add a new DATA chunk for processing. */
/* include/version.h. Generated by alsa/ksync script. */
#define CONFIG_SND_VERSION "1.0.14rc3"
-#define CONFIG_SND_DATE " (Tue Mar 06 13:10:00 2007 UTC)"
+#define CONFIG_SND_DATE " (Wed Mar 14 07:25:50 2007 UTC)"
shm_exit_ns(ns);
kfree(ns);
}
+#else
+int copy_ipcs(unsigned long flags, struct task_struct *tsk)
+{
+ if (flags & CLONE_NEWIPC)
+ return -EINVAL;
+ return 0;
+}
#endif
/**
void audit_log_task_context(struct audit_buffer *ab)
{
char *ctx = NULL;
- ssize_t len = 0;
+ unsigned len;
+ int error;
+ u32 sid;
+
+ selinux_get_task_sid(current, &sid);
+ if (!sid)
+ return;
- len = security_getprocattr(current, "current", NULL, 0);
- if (len < 0) {
- if (len != -EINVAL)
+ error = selinux_sid_to_string(sid, &ctx, &len);
+ if (error) {
+ if (error != -EINVAL)
goto error_path;
return;
}
- ctx = kmalloc(len, GFP_KERNEL);
- if (!ctx)
- goto error_path;
-
- len = security_getprocattr(current, "current", ctx, len);
- if (len < 0 )
- goto error_path;
-
audit_log_format(ab, " subj=%s", ctx);
+ kfree(ctx);
return;
error_path:
- kfree(ctx);
audit_panic("error in audit_log_task_context");
return;
}
}
#ifdef CONFIG_SUSPEND_SMP
+/* Needed to prevent the microcode driver from requesting firmware in its CPU
+ * hotplug notifier during the suspend/resume.
+ */
+int suspend_cpu_hotplug;
+EXPORT_SYMBOL(suspend_cpu_hotplug);
+
static cpumask_t frozen_cpus;
int disable_nonboot_cpus(void)
int cpu, first_cpu, error = 0;
mutex_lock(&cpu_add_remove_lock);
- first_cpu = first_cpu(cpu_present_map);
- if (!cpu_online(first_cpu)) {
- error = _cpu_up(first_cpu);
- if (error) {
- printk(KERN_ERR "Could not bring CPU%d up.\n",
- first_cpu);
- goto out;
- }
- }
-
+ suspend_cpu_hotplug = 1;
+ first_cpu = first_cpu(cpu_online_map);
/* We take down all of the non-boot CPUs in one shot to avoid races
* with the userspace trying to use the CPU hotplug at the same time
*/
} else {
printk(KERN_ERR "Non-boot CPUs are not disabled\n");
}
-out:
+ suspend_cpu_hotplug = 0;
mutex_unlock(&cpu_add_remove_lock);
return error;
}
/* Allow everyone to use the CPU hotplug again */
mutex_lock(&cpu_add_remove_lock);
cpu_hotplug_disabled = 0;
- mutex_unlock(&cpu_add_remove_lock);
if (cpus_empty(frozen_cpus))
- return;
+ goto out;
+ suspend_cpu_hotplug = 1;
printk("Enabling non-boot CPUs ...\n");
for_each_cpu_mask(cpu, frozen_cpus) {
- error = cpu_up(cpu);
+ error = _cpu_up(cpu);
if (!error) {
printk("CPU%d is up\n", cpu);
continue;
}
- printk(KERN_WARNING "Error taking CPU%d up: %d\n",
- cpu, error);
+ printk(KERN_WARNING "Error taking CPU%d up: %d\n", cpu, error);
}
cpus_clear(frozen_cpus);
+ suspend_cpu_hotplug = 0;
+out:
+ mutex_unlock(&cpu_add_remove_lock);
}
#endif
pgrp = task_pgrp(tsk);
if ((task_pgrp(t) != pgrp) &&
- (task_session(t) != task_session(tsk)) &&
+ (task_session(t) == task_session(tsk)) &&
will_become_orphaned_pgrp(pgrp, tsk) &&
has_stopped_jobs(pgrp)) {
__kill_pgrp_info(SIGHUP, SEND_SIG_PRIV, pgrp);
static inline void rt_mutex_init_task(struct task_struct *p)
{
-#ifdef CONFIG_RT_MUTEXES
spin_lock_init(&p->pi_lock);
+#ifdef CONFIG_RT_MUTEXES
plist_head_init(&p->pi_waiters, &p->pi_lock);
p->pi_blocked_on = NULL;
#endif
if (!pi_state)
return -EINVAL;
+ spin_lock(&pi_state->pi_mutex.wait_lock);
new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
/*
pi_state->owner = new_owner;
spin_unlock_irq(&new_owner->pi_lock);
+ spin_unlock(&pi_state->pi_mutex.wait_lock);
rt_mutex_unlock(&pi_state->pi_mutex);
return 0;
static void hrtimer_get_softirq_time(struct hrtimer_cpu_base *base)
{
ktime_t xtim, tomono;
- struct timespec xts;
+ struct timespec xts, tom;
unsigned long seq;
do {
#else
xts = xtime;
#endif
+ tom = wall_to_monotonic;
} while (read_seqretry(&xtime_lock, seq));
xtim = timespec_to_ktime(xts);
- tomono = timespec_to_ktime(wall_to_monotonic);
+ tomono = timespec_to_ktime(tom);
base->clock_base[CLOCK_REALTIME].softirq_time = xtim;
base->clock_base[CLOCK_MONOTONIC].softirq_time =
ktime_add(xtim, tomono);
on_each_cpu(retrigger_next_event, NULL, 0, 1);
}
+/*
+ * During resume we might have to reprogram the high resolution timer
+ * interrupt (on the local CPU):
+ */
+void hres_timers_resume(void)
+{
+ WARN_ON_ONCE(num_online_cpus() > 1);
+
+ /* Retrigger the CPU local events: */
+ retrigger_next_event(NULL);
+}
+
/*
* Check, whether the timer is on the callback pending list
*/
orun++;
}
timer->expires = ktime_add(timer->expires, interval);
+ /*
+ * Make sure that the result did not wrap with a very large
+ * interval.
+ */
+ if (timer->expires.tv64 < 0)
+ timer->expires = ktime_set(KTIME_SEC_MAX, 0);
return orun;
}
timer_stats_hrtimer_set_start_info(timer);
- enqueue_hrtimer(timer, new_base, base == new_base);
+ /*
+ * Only allow reprogramming if the new base is on this CPU.
+ * (it might still be on another CPU if the timer was pending)
+ */
+ enqueue_hrtimer(timer, new_base,
+ new_base->cpu_base == &__get_cpu_var(hrtimer_bases));
unlock_hrtimer_base(timer, &flags);
rc = request_irq(irq, handler, irqflags, devname, dev_id);
if (rc) {
- kfree(dr);
+ devres_free(dr);
return rc;
}
int count = 10;
int unlock = 1;
+ if (unlikely(!debug_locks)) {
+ printk("INFO: lockdep is turned off.\n");
+ return;
+ }
printk("\nShowing all locks held in the system:\n");
/*
void debug_show_held_locks(struct task_struct *task)
{
+ if (unlikely(!debug_locks)) {
+ printk("INFO: lockdep is turned off.\n");
+ return;
+ }
lockdep_print_held_locks(task);
}
/* Lookup built-in module entry in /sys/modules */
mkobj = kset_find_obj(&module_subsys.kset, drv->mod_name);
- if (mkobj)
+ if (mkobj) {
mk = container_of(mkobj, struct module_kobject, kobj);
+ /* remember our module structure */
+ drv->mkobj = mk;
+ /* kset_find_obj took a reference */
+ kobject_put(mkobj);
+ }
}
if (!mk)
void module_remove_driver(struct device_driver *drv)
{
+ struct module_kobject *mk = NULL;
char *driver_name;
if (!drv)
return;
sysfs_remove_link(&drv->kobj, "module");
- if (drv->owner && drv->owner->mkobj.drivers_dir) {
+
+ if (drv->owner)
+ mk = &drv->owner->mkobj;
+ else if (drv->mkobj)
+ mk = drv->mkobj;
+ if (mk && mk->drivers_dir) {
driver_name = make_driver_name(drv);
if (driver_name) {
- sysfs_remove_link(drv->owner->mkobj.drivers_dir,
- driver_name);
+ sysfs_remove_link(mk->drivers_dir, driver_name);
kfree(driver_name);
}
}
{
struct kparam_string *kps = kp->arg;
+ if (!val) {
+ printk(KERN_ERR "%s: missing param set value\n", kp->name);
+ return -EINVAL;
+ }
if (strlen(val)+1 > kps->maxlen) {
printk(KERN_ERR "%s: string doesn't fit in %u chars.\n",
kp->name, kps->maxlen-1);
return 1;
}
- set_console(SUSPEND_CONSOLE);
+ if (set_console(SUSPEND_CONSOLE)) {
+ /*
+ * We're unable to switch to the SUSPEND_CONSOLE.
+ * Let the calling function know so it can decide
+ * what to do.
+ */
+ release_console_sem();
+ return 1;
+ }
release_console_sem();
if (vt_waitactive(SUSPEND_CONSOLE)) {
goto Done;
}
- error = platform_prepare();
- if (error) {
- swsusp_free();
- goto Thaw;
- }
-
pr_debug("PM: Reading swsusp image.\n");
error = swsusp_read();
enable_nonboot_cpus();
Free:
swsusp_free();
- platform_finish();
device_resume();
resume_console();
Thaw:
size += highmem_size;
for_each_zone (zone)
if (populated_zone(zone)) {
+ tmp += snapshot_additional_pages(zone);
if (is_highmem(zone)) {
highmem_size -=
zone_page_state(zone, NR_FREE_PAGES);
} else {
tmp -= zone_page_state(zone, NR_FREE_PAGES);
tmp += zone->lowmem_reserve[ZONE_NORMAL];
- tmp += snapshot_additional_pages(zone);
}
}
if (error) {
printk(KERN_ERR "Failed to suspend some devices.\n");
} else {
- /* Enter S3, system is already frozen */
- suspend_enter(PM_SUSPEND_MEM);
-
+ error = disable_nonboot_cpus();
+ if (!error) {
+ /* Enter S3, system is already frozen */
+ suspend_enter(PM_SUSPEND_MEM);
+ enable_nonboot_cpus();
+ }
/* Wake up devices */
device_resume();
}
return retval;
}
-static inline struct task_struct *eldest_child(struct task_struct *p)
-{
- if (list_empty(&p->children))
- return NULL;
- return list_entry(p->children.next,struct task_struct,sibling);
-}
-
-static inline struct task_struct *older_sibling(struct task_struct *p)
-{
- if (p->sibling.prev==&p->parent->children)
- return NULL;
- return list_entry(p->sibling.prev,struct task_struct,sibling);
-}
-
-static inline struct task_struct *younger_sibling(struct task_struct *p)
-{
- if (p->sibling.next==&p->parent->children)
- return NULL;
- return list_entry(p->sibling.next,struct task_struct,sibling);
-}
-
static const char stat_nam[] = "RSDTtZX";
static void show_task(struct task_struct *p)
{
- struct task_struct *relative;
unsigned long free = 0;
unsigned state;
free = (unsigned long)n - (unsigned long)end_of_stack(p);
}
#endif
- printk("%5lu %5d %6d ", free, p->pid, p->parent->pid);
- if ((relative = eldest_child(p)))
- printk("%5d ", relative->pid);
- else
- printk(" ");
- if ((relative = younger_sibling(p)))
- printk("%7d", relative->pid);
- else
- printk(" ");
- if ((relative = older_sibling(p)))
- printk(" %5d", relative->pid);
- else
- printk(" ");
+ printk("%5lu %5d %6d", free, p->pid, p->parent->pid);
if (!p->mm)
printk(" (L-TLB)\n");
else
{
int op;
- if (!capable(CAP_SYS_ADMIN))
+ if (write && !capable(CAP_SYS_ADMIN))
return -EPERM;
op = OP_OR;
(((u64)usec * USEC_CONVERSION + USEC_ROUND) >>
(USEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC;
}
+EXPORT_SYMBOL(timeval_to_jiffies);
void jiffies_to_timeval(const unsigned long jiffies, struct timeval *value)
{
tv_usec /= NSEC_PER_USEC;
value->tv_usec = tv_usec;
}
+EXPORT_SYMBOL(jiffies_to_timeval);
/*
* Convert jiffies/jiffies_64 to clock_t and back.
}
EXPORT_SYMBOL_GPL(clockevents_notify);
-#ifdef CONFIG_SYSFS
-
-/**
- * clockevents_show_registered - sysfs interface for listing clockevents
- * @dev: unused
- * @buf: char buffer to be filled with clock events list
- *
- * Provides sysfs interface for listing registered clock event devices
- */
-static ssize_t clockevents_show_registered(struct sys_device *dev, char *buf)
-{
- struct list_head *tmp;
- char *p = buf;
- int cpu;
-
- spin_lock(&clockevents_lock);
-
- list_for_each(tmp, &clockevent_devices) {
- struct clock_event_device *ce;
-
- ce = list_entry(tmp, struct clock_event_device, list);
- p += sprintf(p, "%-20s F:%04x M:%d", ce->name,
- ce->features, ce->mode);
- p += sprintf(p, " C:");
- if (!cpus_equal(ce->cpumask, cpu_possible_map)) {
- for_each_cpu_mask(cpu, ce->cpumask)
- p += sprintf(p, " %d", cpu);
- } else {
- /*
- * FIXME: Add the cpu which is handling this sucker
- */
- }
- p += sprintf(p, "\n");
- }
-
- spin_unlock(&clockevents_lock);
-
- return p - buf;
-}
-
-/*
- * Sysfs setup bits:
- */
-static SYSDEV_ATTR(registered, 0600,
- clockevents_show_registered, NULL);
-
-static struct sysdev_class clockevents_sysclass = {
- set_kset_name("clockevents"),
-};
-
-static struct sys_device clockevents_sys_device = {
- .id = 0,
- .cls = &clockevents_sysclass,
-};
-
-static int __init clockevents_sysfs_init(void)
-{
- int error = sysdev_class_register(&clockevents_sysclass);
-
- if (!error)
- error = sysdev_register(&clockevents_sys_device);
- if (!error)
- error = sysdev_create_file(
- &clockevents_sys_device,
- &attr_registered);
- return error;
-}
-device_initcall(clockevents_sysfs_init);
-#endif
watchdog_timer.expires = jiffies + WATCHDOG_INTERVAL;
add_timer(&watchdog_timer);
}
- } else if (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS) {
+ } else {
+ if (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS)
cs->flags |= CLOCK_SOURCE_VALID_FOR_HRES;
if (!watchdog || cs->rating > watchdog->rating) {
return clocksource_register(&clocksource_jiffies);
}
-module_init(init_jiffies_clocksource);
+core_initcall(init_jiffies_clocksource);
/* TIME_ERROR prevents overwriting the CMOS clock */
static int time_state = TIME_OK; /* clock synchronization status */
int time_status = STA_UNSYNC; /* clock status bits */
-static long time_offset; /* time adjustment (ns) */
+static s64 time_offset; /* time adjustment (ns) */
static long time_constant = 2; /* pll time constant */
long time_maxerror = NTP_PHASE_LIMIT; /* maximum error (us) */
long time_esterror = NTP_PHASE_LIMIT; /* estimated error (us) */
*/
int do_adjtimex(struct timex *txc)
{
- long ltemp, mtemp, save_adjust;
+ long mtemp, save_adjust, rem;
s64 freq_adj, temp64;
int result;
time_adjust = txc->offset;
}
else if (time_status & STA_PLL) {
- ltemp = txc->offset * NSEC_PER_USEC;
+ time_offset = txc->offset * NSEC_PER_USEC;
/*
* Scale the phase adjustment and
* clamp to the operating range.
*/
- time_offset = min(ltemp, MAXPHASE * NSEC_PER_USEC);
- time_offset = max(time_offset, -MAXPHASE * NSEC_PER_USEC);
+ time_offset = min(time_offset, (s64)MAXPHASE * NSEC_PER_USEC);
+ time_offset = max(time_offset, (s64)-MAXPHASE * NSEC_PER_USEC);
/*
* Select whether the frequency is to be controlled
mtemp = xtime.tv_sec - time_reftime;
time_reftime = xtime.tv_sec;
- freq_adj = (s64)time_offset * mtemp;
+ freq_adj = time_offset * mtemp;
freq_adj = shift_right(freq_adj, time_constant * 2 +
(SHIFT_PLL + 2) * 2 - SHIFT_NSEC);
if (mtemp >= MINSEC && (time_status & STA_FLL || mtemp > MAXSEC)) {
- temp64 = (s64)time_offset << (SHIFT_NSEC - SHIFT_FLL);
+ temp64 = time_offset << (SHIFT_NSEC - SHIFT_FLL);
if (time_offset < 0) {
temp64 = -temp64;
do_div(temp64, mtemp);
freq_adj += time_freq;
freq_adj = min(freq_adj, (s64)MAXFREQ_NSEC);
time_freq = max(freq_adj, (s64)-MAXFREQ_NSEC);
- time_offset = (time_offset / NTP_INTERVAL_FREQ)
- << SHIFT_UPDATE;
+ time_offset = div_long_long_rem_signed(time_offset,
+ NTP_INTERVAL_FREQ,
+ &rem);
+ time_offset <<= SHIFT_UPDATE;
} /* STA_PLL */
} /* txc->modes & ADJ_OFFSET */
if (txc->modes & ADJ_TICK)
result = TIME_ERROR;
if ((txc->modes & ADJ_OFFSET_SINGLESHOT) == ADJ_OFFSET_SINGLESHOT)
- txc->offset = save_adjust;
+ txc->offset = save_adjust;
else
- txc->offset = shift_right(time_offset, SHIFT_UPDATE)
- * NTP_INTERVAL_FREQ / 1000;
- txc->freq = (time_freq / NSEC_PER_USEC)
- << (SHIFT_USEC - SHIFT_NSEC);
+ txc->offset = ((long)shift_right(time_offset, SHIFT_UPDATE)) *
+ NTP_INTERVAL_FREQ / 1000;
+ txc->freq = (time_freq / NSEC_PER_USEC) <<
+ (SHIFT_USEC - SHIFT_NSEC);
txc->maxerror = time_maxerror;
txc->esterror = time_esterror;
txc->status = time_status;
spin_lock_irqsave(&tick_broadcast_lock, flags);
bc = tick_broadcast_device.evtdev;
- if (bc) {
- if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC &&
- !cpus_empty(tick_broadcast_mask))
- tick_broadcast_start_periodic(bc);
- broadcast = cpu_isset(smp_processor_id(), tick_broadcast_mask);
+ if (bc) {
+ switch (tick_broadcast_device.mode) {
+ case TICKDEV_MODE_PERIODIC:
+ if (!cpus_empty(tick_broadcast_mask))
+ tick_broadcast_start_periodic(bc);
+ broadcast = cpu_isset(smp_processor_id(),
+ tick_broadcast_mask);
+ break;
+ case TICKDEV_MODE_ONESHOT:
+ broadcast = tick_resume_broadcast_oneshot(bc);
+ break;
+ }
}
spin_unlock_irqrestore(&tick_broadcast_lock, flags);
}
}
+int tick_resume_broadcast_oneshot(struct clock_event_device *bc)
+{
+ clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT);
+
+ if (!cpus_empty(tick_broadcast_oneshot_mask))
+ tick_broadcast_set_event(ktime_get(), 1);
+
+ return cpu_isset(smp_processor_id(), tick_broadcast_oneshot_mask);
+}
+
/*
* Reprogram the broadcast device:
*
spin_unlock_irqrestore(&tick_device_lock, flags);
}
-static void tick_suspend_periodic(void)
+static void tick_suspend(void)
{
struct tick_device *td = &__get_cpu_var(tick_cpu_device);
unsigned long flags;
spin_lock_irqsave(&tick_device_lock, flags);
- if (td->mode == TICKDEV_MODE_PERIODIC)
- clockevents_set_mode(td->evtdev, CLOCK_EVT_MODE_SHUTDOWN);
+ clockevents_set_mode(td->evtdev, CLOCK_EVT_MODE_SHUTDOWN);
spin_unlock_irqrestore(&tick_device_lock, flags);
}
-static void tick_resume_periodic(void)
+static void tick_resume(void)
{
struct tick_device *td = &__get_cpu_var(tick_cpu_device);
unsigned long flags;
spin_lock_irqsave(&tick_device_lock, flags);
if (td->mode == TICKDEV_MODE_PERIODIC)
tick_setup_periodic(td->evtdev, 0);
+ else
+ tick_resume_oneshot();
spin_unlock_irqrestore(&tick_device_lock, flags);
}
break;
case CLOCK_EVT_NOTIFY_SUSPEND:
- tick_suspend_periodic();
+ tick_suspend();
tick_suspend_broadcast();
break;
case CLOCK_EVT_NOTIFY_RESUME:
if (!tick_resume_broadcast())
- tick_resume_periodic();
+ tick_resume();
break;
default:
extern int tick_program_event(ktime_t expires, int force);
extern void tick_oneshot_notify(void);
extern int tick_switch_to_oneshot(void (*handler)(struct clock_event_device *));
-
+extern void tick_resume_oneshot(void);
# ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
extern void tick_broadcast_setup_oneshot(struct clock_event_device *bc);
extern void tick_broadcast_oneshot_control(unsigned long reason);
extern void tick_broadcast_switch_to_oneshot(void);
extern void tick_shutdown_broadcast_oneshot(unsigned int *cpup);
+extern int tick_resume_broadcast_oneshot(struct clock_event_device *bc);
# else /* BROADCAST */
static inline void tick_broadcast_setup_oneshot(struct clock_event_device *bc)
{
{
BUG();
}
+static inline void tick_resume_oneshot(void)
+{
+ BUG();
+}
static inline int tick_program_event(ktime_t expires, int force)
{
return 0;
}
static inline void tick_broadcast_oneshot_control(unsigned long reason) { }
static inline void tick_shutdown_broadcast_oneshot(unsigned int *cpup) { }
+static inline int tick_resume_broadcast_oneshot(struct clock_event_device *bc)
+{
+ return 0;
+}
#endif /* !TICK_ONESHOT */
/*
}
}
+/**
+ * tick_resume_oneshot - resume oneshot mode
+ */
+void tick_resume_oneshot(void)
+{
+ struct tick_device *td = &__get_cpu_var(tick_cpu_device);
+ struct clock_event_device *dev = td->evtdev;
+
+ clockevents_set_mode(dev, CLOCK_EVT_MODE_ONESHOT);
+ tick_program_event(ktime_get(), 1);
+}
+
/**
* tick_setup_oneshot - setup the event device for oneshot mode (hres or nohz)
*/
return;
}
SEQ_printf(m, "%s\n", dev->name);
- SEQ_printf(m, " max_delta_ns: %ld\n", dev->max_delta_ns);
- SEQ_printf(m, " min_delta_ns: %ld\n", dev->min_delta_ns);
- SEQ_printf(m, " mult: %ld\n", dev->mult);
+ SEQ_printf(m, " max_delta_ns: %lu\n", dev->max_delta_ns);
+ SEQ_printf(m, " min_delta_ns: %lu\n", dev->min_delta_ns);
+ SEQ_printf(m, " mult: %lu\n", dev->mult);
SEQ_printf(m, " shift: %d\n", dev->shift);
SEQ_printf(m, " mode: %d\n", dev->mode);
SEQ_printf(m, " next_event: %Ld nsecs\n",
{
ktime_t hr_delta = hrtimer_get_next_event();
struct timespec tsdelta;
+ unsigned long delta;
if (hr_delta.tv64 == KTIME_MAX)
return expires;
- if (hr_delta.tv64 <= TICK_NSEC)
- return now;
+ /*
+ * Expired timer available, let it expire in the next tick
+ */
+ if (hr_delta.tv64 <= 0)
+ return now + 1;
tsdelta = ktime_to_timespec(hr_delta);
- now += timespec_to_jiffies(&tsdelta);
+ delta = timespec_to_jiffies(&tsdelta);
+ /*
+ * Take rounding errors into account and make sure that it
+ * expires in the next tick. Otherwise we go into an endless
+ * ping pong due to tick_nohz_stop_sched_tick() retriggering
+ * the timer softirq
+ */
+ if (delta < 1)
+ delta = 1;
+ now += delta;
if (time_before(now, expires))
return now;
return expires;
clockevents_notify(CLOCK_EVT_NOTIFY_RESUME, NULL);
/* Resume hrtimers */
- clock_was_set();
+ hres_timers_resume();
return 0;
}
}
EXPORT_SYMBOL(congestion_wait);
+long congestion_wait_interruptible(int rw, long timeout)
+{
+ long ret;
+ DEFINE_WAIT(wait);
+ wait_queue_head_t *wqh = &congestion_wqh[rw];
+
+ prepare_to_wait(wqh, &wait, TASK_INTERRUPTIBLE);
+ if (signal_pending(current))
+ ret = -ERESTARTSYS;
+ else
+ ret = io_schedule_timeout(timeout);
+ finish_wait(wqh, &wait);
+ return ret;
+}
+EXPORT_SYMBOL(congestion_wait_interruptible);
+
/**
* congestion_end - wake up sleepers on a congested backing_dev_info
* @rw: READ or WRITE
/*
* is destination page below bounce pfn?
*/
- if (page_to_pfn(page) < q->bounce_pfn)
+ if (page_to_pfn(page) <= q->bounce_pfn)
continue;
/*
struct file *file = iocb->ki_filp;
struct address_space *mapping = file->f_mapping;
ssize_t retval;
- size_t write_len = 0;
+ size_t write_len;
+ pgoff_t end = 0; /* silence gcc */
/*
* If it's a write, unmap all mmappings of the file up-front. This
*/
if (rw == WRITE) {
write_len = iov_length(iov, nr_segs);
+ end = (offset + write_len - 1) >> PAGE_CACHE_SHIFT;
if (mapping_mapped(mapping))
unmap_mapping_range(mapping, offset, write_len, 0);
}
retval = filemap_write_and_wait(mapping);
- if (retval == 0) {
- retval = mapping->a_ops->direct_IO(rw, iocb, iov,
- offset, nr_segs);
- if (rw == WRITE && mapping->nrpages) {
- pgoff_t end = (offset + write_len - 1)
- >> PAGE_CACHE_SHIFT;
- int err = invalidate_inode_pages2_range(mapping,
+ if (retval)
+ goto out;
+
+ /*
+ * After a write we want buffered reads to be sure to go to disk to get
+ * the new data. We invalidate clean cached page from the region we're
+ * about to write. We do this *before* the write so that we can return
+ * -EIO without clobbering -EIOCBQUEUED from ->direct_IO().
+ */
+ if (rw == WRITE && mapping->nrpages) {
+ retval = invalidate_inode_pages2_range(mapping,
offset >> PAGE_CACHE_SHIFT, end);
- if (err)
- retval = err;
- }
+ if (retval)
+ goto out;
}
+
+ retval = mapping->a_ops->direct_IO(rw, iocb, iov, offset, nr_segs);
+ if (retval)
+ goto out;
+
+ /*
+ * Finally, try again to invalidate clean pages which might have been
+ * faulted in by get_user_pages() if the source of the write was an
+ * mmap()ed region of the file we're writing. That's a pretty crazy
+ * thing to do, so we don't support it 100%. If this invalidation
+ * fails and we have -EIOCBQUEUED we ignore the failure.
+ */
+ if (rw == WRITE && mapping->nrpages) {
+ int err = invalidate_inode_pages2_range(mapping,
+ offset >> PAGE_CACHE_SHIFT, end);
+ if (err && retval >= 0)
+ retval = err;
+ }
+out:
return retval;
}
#include <asm/tlbflush.h>
#include "filemap.h"
+/*
+ * We do use our own empty page to avoid interference with other users
+ * of ZERO_PAGE(), such as /dev/zero
+ */
+static struct page *__xip_sparse_page;
+
+static struct page *xip_sparse_page(void)
+{
+ if (!__xip_sparse_page) {
+ unsigned long zeroes = get_zeroed_page(GFP_HIGHUSER);
+ if (zeroes) {
+ static DEFINE_SPINLOCK(xip_alloc_lock);
+ spin_lock(&xip_alloc_lock);
+ if (!__xip_sparse_page)
+ __xip_sparse_page = virt_to_page(zeroes);
+ else
+ free_page(zeroes);
+ spin_unlock(&xip_alloc_lock);
+ }
+ }
+ return __xip_sparse_page;
+}
+
/*
* This is a file read routine for execute in place files, and uses
* the mapping->a_ops->get_xip_page() function for the actual low-level
* xip_write
*
* This function walks all vmas of the address_space and unmaps the
- * ZERO_PAGE when found at pgoff. Should it go in rmap.c?
+ * __xip_sparse_page when found at pgoff.
*/
static void
__xip_unmap (struct address_space * mapping,
spinlock_t *ptl;
struct page *page;
+ page = __xip_sparse_page;
+ if (!page)
+ return;
+
spin_lock(&mapping->i_mmap_lock);
vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
mm = vma->vm_mm;
address = vma->vm_start +
((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
BUG_ON(address < vma->vm_start || address >= vma->vm_end);
- page = ZERO_PAGE(0);
pte = page_check_address(page, mm, address, &ptl);
if (pte) {
/* Nuke the page table entry. */
+ area->vm_pgoff;
size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
- if (pgoff >= size) {
- return NULL;
- }
+ if (pgoff >= size)
+ return NOPAGE_SIGBUS;
page = mapping->a_ops->get_xip_page(mapping, pgoff*(PAGE_SIZE/512), 0);
- if (!IS_ERR(page)) {
+ if (!IS_ERR(page))
goto out;
- }
if (PTR_ERR(page) != -ENODATA)
- return NULL;
+ return NOPAGE_SIGBUS;
/* sparse block */
if ((area->vm_flags & (VM_WRITE | VM_MAYWRITE)) &&
page = mapping->a_ops->get_xip_page (mapping,
pgoff*(PAGE_SIZE/512), 1);
if (IS_ERR(page))
- return NULL;
+ return NOPAGE_SIGBUS;
/* unmap page at pgoff from all other vmas */
__xip_unmap(mapping, pgoff);
} else {
- /* not shared and writable, use ZERO_PAGE() */
- page = ZERO_PAGE(0);
+ /* not shared and writable, use xip_sparse_page() */
+ page = xip_sparse_page();
+ if (!page)
+ return NOPAGE_OOM;
}
out:
* Other filesystems return -ENOSYS.
*/
static long madvise_remove(struct vm_area_struct *vma,
+ struct vm_area_struct **prev,
unsigned long start, unsigned long end)
{
struct address_space *mapping;
- loff_t offset, endoff;
+ loff_t offset, endoff;
+ int error;
+
+ *prev = NULL; /* tell sys_madvise we drop mmap_sem */
if (vma->vm_flags & (VM_LOCKED|VM_NONLINEAR|VM_HUGETLB))
return -EINVAL;
+ ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
endoff = (loff_t)(end - vma->vm_start - 1)
+ ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
- return vmtruncate_range(mapping->host, offset, endoff);
+
+ /* vmtruncate_range needs to take i_mutex and i_alloc_sem */
+ up_write(&current->mm->mmap_sem);
+ error = vmtruncate_range(mapping->host, offset, endoff);
+ down_write(&current->mm->mmap_sem);
+ return error;
}
static long
error = madvise_behavior(vma, prev, start, end, behavior);
break;
case MADV_REMOVE:
- error = madvise_remove(vma, start, end);
+ error = madvise_remove(vma, prev, start, end);
break;
case MADV_WILLNEED:
if (error)
goto out;
start = tmp;
- if (start < prev->vm_end)
+ if (prev && start < prev->vm_end)
start = prev->vm_end;
error = unmapped_error;
if (start >= end)
goto out;
- vma = prev->vm_next;
+ if (prev)
+ vma = prev->vm_next;
+ else /* madvise_remove dropped mmap_sem */
+ vma = find_vma(current->mm, start);
}
out:
up_write(&current->mm->mmap_sem);
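A small userspace illustration may help here (an editor's sketch, not part of the patch; the file name and sizes are made up): MADV_REMOVE is the call whose kernel side is reworked above, and on a tmpfs-backed shared mapping it frees the backing store for the given page range.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t len = 4 * 4096;
            int fd = open("/dev/shm/hole-demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
            char *p;

            if (fd < 0 || ftruncate(fd, len) < 0)
                    return 1;
            p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    return 1;
            memset(p, 0xaa, len);
            /* punch a hole covering the second page only */
            if (madvise(p + 4096, 4096, MADV_REMOVE) != 0)
                    perror("madvise(MADV_REMOVE)");
            munmap(p, len);
            close(fd);
            unlink("/dev/shm/hole-demo");
            return 0;
    }

Before this change the whole operation ran with mmap_sem held for writing; the hunk above drops and retakes it around vmtruncate_range(), which is why the caller now has to re-look-up the vma afterwards.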
void **pslot;
if (!mapping) {
- /* Anonymous page */
+ /* Anonymous page without mapping */
if (page_count(page) != 1)
return -EAGAIN;
return 0;
*/
__put_page(page);
+ /*
+ * If moved to a different zone then also account
+ * the page for that zone. Other VM counters will be
+ * taken care of when we establish references to the
+ * new page and drop references to the old page.
+ *
+ * Note that anonymous pages are accounted for
+ * via NR_FILE_PAGES and NR_ANON_PAGES if they
+ * are mapped to swap space.
+ */
+ __dec_zone_page_state(page, NR_FILE_PAGES);
+ __inc_zone_page_state(newpage, NR_FILE_PAGES);
+
write_unlock_irq(&mapping->tree_lock);
return 0;
EXPORT_SYMBOL(mem_map);
EXPORT_SYMBOL(__vm_enough_memory);
+EXPORT_SYMBOL(num_physpages);
/* list of shareable VMAs */
struct rb_root nommu_vma_tree = RB_ROOT;
unsigned long pglen = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
unsigned long vmpglen;
+ /* suppress VMA sharing for shared regions */
+ if (vm_flags & VM_SHARED &&
+ capabilities & BDI_CAP_MAP_DIRECT)
+ goto dont_share_VMAs;
+
for (rb = rb_first(&nommu_vma_tree); rb; rb = rb_next(rb)) {
vma = rb_entry(rb, struct vm_area_struct, vm_rb);
goto shared;
}
+ dont_share_VMAs:
vma = NULL;
/* obtain the address at which to make a shared mapping
}
EXPORT_SYMBOL(unmap_mapping_range);
+/*
+ * ask for an unmapped area at which to create a mapping on a file
+ */
+unsigned long get_unmapped_area(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long pgoff,
+ unsigned long flags)
+{
+ unsigned long (*get_area)(struct file *, unsigned long, unsigned long,
+ unsigned long, unsigned long);
+
+ get_area = current->mm->get_unmapped_area;
+ if (file && file->f_op && file->f_op->get_unmapped_area)
+ get_area = file->f_op->get_unmapped_area;
+
+ if (!get_area)
+ return -ENOSYS;
+
+ return get_area(file, addr, len, pgoff, flags);
+}
+
+EXPORT_SYMBOL(get_unmapped_area);
+
/*
* Check that a process has enough memory to allocate a new virtual
* mapping. 0 means there is enough memory for the allocation to
struct zone **z;
nodemask_t nodes;
int node;
+
+ nodes_clear(nodes);
/* node has memory ? */
for_each_online_node(node)
if (NODE_DATA(node)->node_present_pages)
* Don't kill the process if any threads are set to OOM_DISABLE
*/
do_each_thread(g, q) {
- if (q->mm == mm && p->oomkilladj == OOM_DISABLE)
+ if (q->mm == mm && q->oomkilladj == OOM_DISABLE)
return 1;
} while_each_thread(g, q);
*/
do_each_thread(g, q) {
if (q->mm == mm && q->tgid != p->tgid)
- force_sig(SIGKILL, p);
+ force_sig(SIGKILL, q);
} while_each_thread(g, q);
return 0;
struct address_space *mapping = page_mapping(page);
if (mapping)
ret = page_mkclean_file(mapping, page);
+ if (page_test_and_clear_dirty(page))
+ ret = 1;
}
- if (page_test_and_clear_dirty(page))
- ret = 1;
return ret;
}
/*
* shmem_free_swp - free some swap entries in a directory
*
- * @dir: pointer to the directory
- * @edir: pointer after last entry of the directory
+ * @dir: pointer to the directory
+ * @edir: pointer after last entry of the directory
+ * @punch_lock: pointer to spinlock when needed for the holepunch case
*/
-static int shmem_free_swp(swp_entry_t *dir, swp_entry_t *edir)
+static int shmem_free_swp(swp_entry_t *dir, swp_entry_t *edir,
+ spinlock_t *punch_lock)
{
+ spinlock_t *punch_unlock = NULL;
swp_entry_t *ptr;
int freed = 0;
for (ptr = dir; ptr < edir; ptr++) {
if (ptr->val) {
+ if (unlikely(punch_lock)) {
+ punch_unlock = punch_lock;
+ punch_lock = NULL;
+ spin_lock(punch_unlock);
+ if (!ptr->val)
+ continue;
+ }
free_swap_and_cache(*ptr);
*ptr = (swp_entry_t){0};
freed++;
}
}
+ if (punch_unlock)
+ spin_unlock(punch_unlock);
return freed;
}
-static int shmem_map_and_free_swp(struct page *subdir,
- int offset, int limit, struct page ***dir)
+static int shmem_map_and_free_swp(struct page *subdir, int offset,
+ int limit, struct page ***dir, spinlock_t *punch_lock)
{
swp_entry_t *ptr;
int freed = 0;
int size = limit - offset;
if (size > LATENCY_LIMIT)
size = LATENCY_LIMIT;
- freed += shmem_free_swp(ptr+offset, ptr+offset+size);
+ freed += shmem_free_swp(ptr+offset, ptr+offset+size,
+ punch_lock);
if (need_resched()) {
shmem_swp_unmap(ptr);
if (*dir) {
long nr_swaps_freed = 0;
int offset;
int freed;
- int punch_hole = 0;
+ int punch_hole;
+ spinlock_t *needs_lock;
+ spinlock_t *punch_lock;
+ unsigned long upper_limit;
inode->i_ctime = inode->i_mtime = CURRENT_TIME;
idx = (start + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
info->flags |= SHMEM_TRUNCATE;
if (likely(end == (loff_t) -1)) {
limit = info->next_index;
+ upper_limit = SHMEM_MAX_INDEX;
info->next_index = idx;
+ needs_lock = NULL;
+ punch_hole = 0;
} else {
- limit = (end + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
- if (limit > info->next_index)
- limit = info->next_index;
+ if (end + 1 >= inode->i_size) { /* we may free a little more */
+ limit = (inode->i_size + PAGE_CACHE_SIZE - 1) >>
+ PAGE_CACHE_SHIFT;
+ upper_limit = SHMEM_MAX_INDEX;
+ } else {
+ limit = (end + 1) >> PAGE_CACHE_SHIFT;
+ upper_limit = limit;
+ }
+ needs_lock = &info->lock;
punch_hole = 1;
}
size = limit;
if (size > SHMEM_NR_DIRECT)
size = SHMEM_NR_DIRECT;
- nr_swaps_freed = shmem_free_swp(ptr+idx, ptr+size);
+ nr_swaps_freed = shmem_free_swp(ptr+idx, ptr+size, needs_lock);
}
/*
* If there are no indirect blocks or we are punching a hole
* below indirect blocks, nothing to be done.
*/
- if (!topdir || (punch_hole && (limit <= SHMEM_NR_DIRECT)))
+ if (!topdir || limit <= SHMEM_NR_DIRECT)
goto done2;
- BUG_ON(limit <= SHMEM_NR_DIRECT);
+ /*
+ * The truncation case has already dropped info->lock, and we're safe
+ * because i_size and next_index have already been lowered, preventing
+ * access beyond. But in the punch_hole case, we still need to take
+ * the lock when updating the swap directory, because there might be
+ * racing accesses by shmem_getpage(SGP_CACHE), shmem_unuse_inode or
+ * shmem_writepage. However, whenever we find we can remove a whole
+ * directory page (not at the misaligned start or end of the range),
+ * we first NULLify its pointer in the level above, and then have no
+ * need to take the lock when updating its contents: needs_lock and
+ * punch_lock (either pointing to info->lock or NULL) manage this.
+ */
+
+ upper_limit -= SHMEM_NR_DIRECT;
limit -= SHMEM_NR_DIRECT;
idx = (idx > SHMEM_NR_DIRECT)? (idx - SHMEM_NR_DIRECT): 0;
offset = idx % ENTRIES_PER_PAGE;
if (*dir) {
diroff = ((idx - ENTRIES_PER_PAGEPAGE/2) %
ENTRIES_PER_PAGEPAGE) / ENTRIES_PER_PAGE;
- if (!diroff && !offset) {
- *dir = NULL;
+ if (!diroff && !offset && upper_limit >= stage) {
+ if (needs_lock) {
+ spin_lock(needs_lock);
+ *dir = NULL;
+ spin_unlock(needs_lock);
+ needs_lock = NULL;
+ } else
+ *dir = NULL;
nr_pages_to_free++;
list_add(&middir->lru, &pages_to_free);
}
}
stage = idx + ENTRIES_PER_PAGEPAGE;
middir = *dir;
- *dir = NULL;
- nr_pages_to_free++;
- list_add(&middir->lru, &pages_to_free);
+ if (punch_hole)
+ needs_lock = &info->lock;
+ if (upper_limit >= stage) {
+ if (needs_lock) {
+ spin_lock(needs_lock);
+ *dir = NULL;
+ spin_unlock(needs_lock);
+ needs_lock = NULL;
+ } else
+ *dir = NULL;
+ nr_pages_to_free++;
+ list_add(&middir->lru, &pages_to_free);
+ }
shmem_dir_unmap(dir);
cond_resched();
dir = shmem_dir_map(middir);
diroff = 0;
}
+ punch_lock = needs_lock;
subdir = dir[diroff];
- if (subdir && page_private(subdir)) {
+ if (subdir && !offset && upper_limit-idx >= ENTRIES_PER_PAGE) {
+ if (needs_lock) {
+ spin_lock(needs_lock);
+ dir[diroff] = NULL;
+ spin_unlock(needs_lock);
+ punch_lock = NULL;
+ } else
+ dir[diroff] = NULL;
+ nr_pages_to_free++;
+ list_add(&subdir->lru, &pages_to_free);
+ }
+ if (subdir && page_private(subdir) /* has swap entries */) {
size = limit - idx;
if (size > ENTRIES_PER_PAGE)
size = ENTRIES_PER_PAGE;
freed = shmem_map_and_free_swp(subdir,
- offset, size, &dir);
+ offset, size, &dir, punch_lock);
if (!dir)
dir = shmem_dir_map(middir);
nr_swaps_freed += freed;
- if (offset)
+ if (offset || punch_lock) {
spin_lock(&info->lock);
- set_page_private(subdir, page_private(subdir) - freed);
- if (offset)
+ set_page_private(subdir,
+ page_private(subdir) - freed);
spin_unlock(&info->lock);
- if (!punch_hole)
- BUG_ON(page_private(subdir) > offset);
- }
- if (offset)
- offset = 0;
- else if (subdir && !page_private(subdir)) {
- dir[diroff] = NULL;
- nr_pages_to_free++;
- list_add(&subdir->lru, &pages_to_free);
+ } else
+ BUG_ON(page_private(subdir) != freed);
}
+ offset = 0;
}
done1:
shmem_dir_unmap(dir);
* generic_delete_inode did it, before we lowered next_index.
* Also, though shmem_getpage checks i_size before adding to
* cache, no recheck after: so fix the narrow window there too.
+ *
+ * Recalling truncate_inode_pages_range and unmap_mapping_range
+ * every time for punch_hole (which never got a chance to clear
+ * SHMEM_PAGEIN at the start of vmtruncate_range) is expensive,
+ * yet hardly ever necessary: try to optimize them out later.
*/
truncate_inode_pages_range(inode->i_mapping, start, end);
+ if (punch_hole)
+ unmap_mapping_range(inode->i_mapping, start,
+ end - start, 1);
}
spin_lock(&info->lock);
/* Print header */
if (lines == 0) {
printk(KERN_ERR
- "Slab corruption: start=%p, len=%d\n",
- realobj, size);
+ "Slab corruption: %s start=%p, len=%d\n",
+ cachep->name, realobj, size);
print_objinfo(cachep, objp, 0);
}
/* Hexdump the affected line */
} else {
vhdr->h_vlan_encapsulated_proto = htons(len);
}
+
+ skb->protocol = htons(ETH_P_8021Q);
+ skb->nh.raw = skb->data;
}
/* Before delegating work to the lower layer, enter our MAC-address */
/*
* Size check to see if ddp->deh_len was crap
* (Otherwise we'll detonate most spectacularly
- * in the middle of recvmsg()).
+ * in the middle of atalk_checksum() or recvmsg()).
*/
- if (skb->len < sizeof(*ddp))
+ if (skb->len < sizeof(*ddp) || skb->len < (len_hops & 1023)) {
+ pr_debug("AppleTalk: dropping corrupted frame (deh_len=%u, "
+ "skb->len=%u)\n", len_hops & 1023, skb->len);
goto freeit;
+ }
/*
* Any checksums. Note we don't do htons() on this == is assumed to be
spin_unlock_irqrestore(&PRIV(dev)->xoff_lock, flags);
}
-static void clip_neigh_destroy(struct neighbour *neigh)
-{
- DPRINTK("clip_neigh_destroy (neigh %p)\n", neigh);
- if (NEIGH2ENTRY(neigh)->vccs)
- printk(KERN_CRIT "clip_neigh_destroy: vccs != NULL !!!\n");
- NEIGH2ENTRY(neigh)->vccs = (void *) NEIGHBOR_DEAD;
-}
-
static void clip_neigh_solicit(struct neighbour *neigh, struct sk_buff *skb)
{
DPRINTK("clip_neigh_solicit (neigh %p, skb %p)\n", neigh, skb);
/* parameters are copied from ARP ... */
.parms = {
.tbl = &clip_tbl,
- .neigh_destructor = clip_neigh_destroy,
.base_reachable_time = 30 * HZ,
.retrans_time = 1 * HZ,
.gc_staletime = 60 * HZ,
#
# Amateur Radio protocols and AX.25 device configuration
#
-# 19971130 Now in an own category to make correct compilation of the
-# AX.25 stuff easier...
-# Joerg Reuter DL1BKE <jreuter@yaina.de>
-# 19980129 Moved to net/ax25/Config.in, sourcing device drivers.
menuconfig HAMRADIO
depends on NET
bool "Amateur Radio support"
help
If you want to connect your Linux box to an amateur radio, answer Y
- here. You want to read <http://www.tapr.org/tapr/html/pkthome.html> and
- the AX25-HOWTO, available from <http://www.tldp.org/docs.html#howto>.
+ here. You want to read <http://www.tapr.org/tapr/html/pkthome.html>
+ and more specifically about AX.25 on Linux
+ <http://www.linux-ax25.org/>.
Note that the answer to this question won't directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about amateur radio.
comment "Packet Radio protocols"
- depends on HAMRADIO && NET
+ depends on HAMRADIO
config AX25
tristate "Amateur Radio AX.25 Level 2 protocol"
- depends on HAMRADIO && NET
- ---help---
+ depends on HAMRADIO
+ help
This is the protocol used for computer communication over amateur
radio. It is either used by itself for point-to-point links, or to
carry other protocols such as tcp/ip. To use it, you need a device
config AX25_DAMA_SLAVE
bool "AX.25 DAMA Slave support"
+ default y
depends on AX25
help
DAMA is a mechanism to prevent collisions when doing AX.25
from clients (called "slaves") and redistributes it to other slaves.
If you say Y here, your Linux box will act as a DAMA slave; this is
transparent in that you don't have to do any special DAMA
- configuration. (Linux cannot yet act as a DAMA server.) If unsure,
- say N.
+ configuration. Linux cannot yet act as a DAMA server. This option
+ only compiles DAMA slave support into the kernel. It still needs to
+ be enabled at runtime. For more about DAMA see
+ <http://www.linux-ax25.org>. If unsure, say Y.
+
+# placeholder until implemented
+config AX25_DAMA_MASTER
+ bool "AX.25 DAMA Master support"
+ depends on AX25_DAMA_SLAVE && BROKEN
+ help
+ DAMA is a mechanism to prevent collisions when doing AX.25
+ networking. A DAMA server (called "master") accepts incoming traffic
+ from clients (called "slaves") and redistributes it to other slaves.
+ If you say Y here, your Linux box will be able to act as a DAMA
+ master once that support is actually implemented; for now this option
+ is only a placeholder, which is why it depends on BROKEN. If unsure,
+ say N.
-# bool ' AX.25 DAMA Master support' CONFIG_AX25_DAMA_MASTER
config NETROM
tristate "Amateur Radio NET/ROM protocol"
depends on AX25
- ---help---
+ help
NET/ROM is a network layer protocol on top of AX.25 useful for
routing.
A comprehensive listing of all the software for Linux amateur radio
users as well as information about how to configure an AX.25 port is
- contained in the AX25-HOWTO, available from
- <http://www.tldp.org/docs.html#howto>. You also might want to
- check out the file <file:Documentation/networking/ax25.txt>. More
- information about digital amateur radio in general is on the WWW at
+ contained in the Linux Ham Wiki, available from
+ <http://www.linux-ax25.org>. You also might want to check out the
+ file <file:Documentation/networking/ax25.txt>. More information about
+ digital amateur radio in general is on the WWW at
<http://www.tapr.org/tapr/html/pkthome.html>.
To compile this driver as a module, choose M here: the
config ROSE
tristate "Amateur Radio X.25 PLP (Rose)"
depends on AX25
- ---help---
+ help
The Packet Layer Protocol (PLP) is a way to route packets over X.25
connections in general and amateur radio AX.25 connections in
particular, essentially an alternative to NET/ROM.
A comprehensive listing of all the software for Linux amateur radio
users as well as information about how to configure an AX.25 port is
- contained in the AX25-HOWTO, available from
- <http://www.tldp.org/docs.html#howto>. You also might want to
- check out the file <file:Documentation/networking/ax25.txt>. More
- information about digital amateur radio in general is on the WWW at
+ contained in the Linux Ham Wiki, available from
+ <http://www.linux-ax25.org>. You also might want to check out the
+ file <file:Documentation/networking/ax25.txt>. More information about
+ digital amateur radio in general is on the WWW at
<http://www.tapr.org/tapr/html/pkthome.html>.
To compile this driver as a module, choose M here: the
module will be called rose.
-
menu "AX.25 network device drivers"
- depends on HAMRADIO && NET && AX25!=n
+ depends on HAMRADIO && AX25
source "drivers/net/hamradio/Kconfig"
endmenu
-
return 0;
}
-static int inline hidp_send_ctrl_message(struct hidp_session *session,
+static inline int hidp_send_ctrl_message(struct hidp_session *session,
unsigned char hdr, unsigned char *data, int size)
{
int err;
{
}
+static const struct {
+ __u16 idVendor;
+ __u16 idProduct;
+ unsigned quirks;
+} hidp_blacklist[] = {
+ /* Apple wireless Mighty Mouse */
+ { 0x05ac, 0x030c, HID_QUIRK_MIGHTYMOUSE | HID_QUIRK_INVERT_HWHEEL },
+
+ { } /* Terminating entry */
+};
+
+static void hidp_setup_quirks(struct hid_device *hid)
+{
+ unsigned int n;
+
+ for (n = 0; hidp_blacklist[n].idVendor; n++)
+ if (hidp_blacklist[n].idVendor == le16_to_cpu(hid->vendor) &&
+ hidp_blacklist[n].idProduct == le16_to_cpu(hid->product))
+ hid->quirks = hidp_blacklist[n].quirks;
+}
+
static inline void hidp_setup_hid(struct hidp_session *session, struct hidp_connadd_req *req)
{
struct hid_device *hid = session->hid;
hid->hidinput_input_event = hidp_hidinput_event;
+ hidp_setup_quirks(hid);
+
list_for_each_entry(report, &hid->report_enum[HID_INPUT_REPORT].report_list, list)
hidp_send_report(session, report);
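For reference, extending the blacklist above is just one more initializer before the terminating entry; the IDs below are invented for illustration and are not part of this patch:

    { 0x1234, 0xabcd, HID_QUIRK_INVERT_HWHEEL },    /* hypothetical device */

hidp_setup_quirks() then copies the matching quirk flags onto the HID device when the Bluetooth connection is set up.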
rcu_read_lock();
fdb = __br_fdb_get(br, addr);
- if (fdb)
- atomic_inc(&fdb->use_count);
+ if (fdb && !atomic_inc_not_zero(&fdb->use_count))
+ fdb = NULL;
rcu_read_unlock();
return fdb;
}
#define brnf_filter_vlan_tagged 1
#endif
-static __be16 inline vlan_proto(const struct sk_buff *skb)
+static inline __be16 vlan_proto(const struct sk_buff *skb)
{
return vlan_eth_hdr(skb)->h_vlan_encapsulated_proto;
}
/* called under bridge lock */
void br_stp_change_bridge_id(struct net_bridge *br, const unsigned char *addr)
{
- unsigned char oldaddr[6];
+ /* should be aligned on 2 bytes for compare_ether_addr() */
+ unsigned short oldaddr_aligned[ETH_ALEN >> 1];
+ unsigned char *oldaddr = (unsigned char *)oldaddr_aligned;
struct net_bridge_port *p;
int wasroot;
br_become_root_bridge(br);
}
-static const unsigned char br_mac_zero[6];
+/* should be aligned on 2 bytes for compare_ether_addr() */
+static const unsigned short br_mac_zero_aligned[ETH_ALEN >> 1];
/* called under bridge lock */
void br_stp_recalculate_bridge_id(struct net_bridge *br)
{
+ const unsigned char *br_mac_zero =
+ (const unsigned char *)br_mac_zero_aligned;
const unsigned char *addr = br_mac_zero;
struct net_bridge_port *p;
else
strlcpy(dev->name, newname, IFNAMSIZ);
- err = device_rename(&dev->dev, dev->name);
- if (!err) {
- hlist_del(&dev->name_hlist);
- hlist_add_head(&dev->name_hlist, dev_name_hash(dev->name));
- raw_notifier_call_chain(&netdev_chain,
- NETDEV_CHANGENAME, dev);
- }
+ device_rename(&dev->dev, dev->name);
+ hlist_del(&dev->name_hlist);
+ hlist_add_head(&dev->name_hlist, dev_name_hash(dev->name));
+ raw_notifier_call_chain(&netdev_chain, NETDEV_CHANGENAME, dev);
return err;
}
if (dev->qdisc_ingress) {
__u32 ttl = (__u32) G_TC_RTTL(skb->tc_verd);
if (MAX_RED_LOOP < ttl++) {
- printk(KERN_WARNING "Redir loop detected Dropping packet (%s->%s)\n",
- skb->input_dev->name, skb->dev->name);
+ printk(KERN_WARNING "Redir loop detected Dropping packet (%d->%d)\n",
+ skb->iif, skb->dev->ifindex);
return TC_ACT_SHOT;
}
skb->tc_verd = SET_TC_AT(skb->tc_verd,AT_INGRESS);
- spin_lock(&dev->ingress_lock);
+ spin_lock(&dev->queue_lock);
if ((q = dev->qdisc_ingress) != NULL)
result = q->enqueue(skb, q);
- spin_unlock(&dev->ingress_lock);
+ spin_unlock(&dev->queue_lock);
}
if (!skb->tstamp.off_sec)
net_timestamp(skb);
- if (!skb->input_dev)
- skb->input_dev = skb->dev;
+ if (!skb->iif)
+ skb->iif = skb->dev->ifindex;
orig_dev = skb_bond(skb);
}
}
- err = -ENETUNREACH;
+ err = -ESRCH;
out:
rcu_read_unlock();
EXPORT_SYMBOL_GPL(fib_rules_lookup);
+static int validate_rulemsg(struct fib_rule_hdr *frh, struct nlattr **tb,
+ struct fib_rules_ops *ops)
+{
+ int err = -EINVAL;
+
+ if (frh->src_len)
+ if (tb[FRA_SRC] == NULL ||
+ frh->src_len > (ops->addr_size * 8) ||
+ nla_len(tb[FRA_SRC]) != ops->addr_size)
+ goto errout;
+
+ if (frh->dst_len)
+ if (tb[FRA_DST] == NULL ||
+ frh->dst_len > (ops->addr_size * 8) ||
+ nla_len(tb[FRA_DST]) != ops->addr_size)
+ goto errout;
+
+ err = 0;
+errout:
+ return err;
+}
+
int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
{
struct fib_rule_hdr *frh = nlmsg_data(nlh);
if (err < 0)
goto errout;
+ err = validate_rulemsg(frh, tb, ops);
+ if (err < 0)
+ goto errout;
+
rule = kzalloc(ops->rule_size, GFP_KERNEL);
if (rule == NULL) {
err = -ENOMEM;
if (err < 0)
goto errout;
+ err = validate_rulemsg(frh, tb, ops);
+ if (err < 0)
+ goto errout;
+
list_for_each_entry(rule, ops->rules_list, list) {
if (frh->action && (frh->action != rule->action))
continue;
return -EAFNOSUPPORT;
rcu_read_lock();
- list_for_each_entry(rule, ops->rules_list, list) {
+ list_for_each_entry_rcu(rule, ops->rules_list, list) {
if (idx < cb->args[0])
goto skip;
n->dead = 1;
shrunk = 1;
write_unlock(&n->lock);
+ if (n->parms->neigh_cleanup)
+ n->parms->neigh_cleanup(n);
neigh_release(n);
continue;
}
NEIGH_PRINTK2("neigh %p is stray.\n", n);
}
write_unlock(&n->lock);
+ if (n->parms->neigh_cleanup)
+ n->parms->neigh_cleanup(n);
neigh_release(n);
}
}
kfree(hh);
}
- if (neigh->parms->neigh_destructor)
- (neigh->parms->neigh_destructor)(neigh);
-
skb_queue_purge(&neigh->arp_queue);
dev_put(neigh->dev);
*np = n->next;
n->dead = 1;
write_unlock(&n->lock);
+ if (n->parms->neigh_cleanup)
+ n->parms->neigh_cleanup(n);
neigh_release(n);
continue;
}
kfree(parms);
}
+static struct lock_class_key neigh_table_proxy_queue_class;
+
void neigh_table_init_no_netlink(struct neigh_table *tbl)
{
unsigned long now = jiffies;
init_timer(&tbl->proxy_timer);
tbl->proxy_timer.data = (unsigned long)tbl;
tbl->proxy_timer.function = neigh_proxy_process;
- skb_queue_head_init(&tbl->proxy_queue);
+ skb_queue_head_init_class(&tbl->proxy_queue,
+ &neigh_table_proxy_queue_class);
tbl->last_flush = now;
tbl->last_rand = now + tbl->parms.reachable_time * 20;
} else
np = &n->next;
write_unlock(&n->lock);
- if (release)
+ if (release) {
+ if (n->parms->neigh_cleanup)
+ n->parms->neigh_cleanup(n);
neigh_release(n);
+ }
}
}
}
if (skb->len < len || len < iph->ihl*4)
goto out;
+ /*
+ * Our transport medium may have padded the buffer out.
+ * Now we trim to the true length of the frame.
+ */
+ if (pskb_trim_rcsum(skb, len))
+ goto out;
+
if (iph->protocol != IPPROTO_UDP)
goto out;
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/capability.h>
+#include <linux/freezer.h>
#include <linux/delay.h>
#include <linux/timer.h>
#include <linux/list.h>
t->control &= ~(T_REMDEV);
}
+ try_to_freeze();
+
set_current_state(TASK_INTERRUPTIBLE);
}
if (err < 0)
goto errout;
- iw += IW_EV_POINT_OFF;
+ /* Payload is at an offset in buffer */
+ iw = iw_buf + IW_EV_POINT_OFF;
}
#endif /* CONFIG_NET_WIRELESS_RTNETLINK */
goto out;
}
-/**
- * alloc_skb_from_cache - allocate a network buffer
- * @cp: kmem_cache from which to allocate the data area
- * (object size must be big enough for @size bytes + skb overheads)
- * @size: size to allocate
- * @gfp_mask: allocation mask
- *
- * Allocate a new &sk_buff. The returned buffer has no headroom and
- * tail room of size bytes. The object has a reference count of one.
- * The return is the buffer. On a failure the return is %NULL.
- *
- * Buffers may only be allocated from interrupts using a @gfp_mask of
- * %GFP_ATOMIC.
- */
-struct sk_buff *alloc_skb_from_cache(struct kmem_cache *cp,
- unsigned int size,
- gfp_t gfp_mask)
-{
- struct sk_buff *skb;
- u8 *data;
-
- /* Get the HEAD */
- skb = kmem_cache_alloc(skbuff_head_cache,
- gfp_mask & ~__GFP_DMA);
- if (!skb)
- goto out;
-
- /* Get the DATA. */
- size = SKB_DATA_ALIGN(size);
- data = kmem_cache_alloc(cp, gfp_mask);
- if (!data)
- goto nodata;
-
- memset(skb, 0, offsetof(struct sk_buff, truesize));
- skb->truesize = size + sizeof(struct sk_buff);
- atomic_set(&skb->users, 1);
- skb->head = data;
- skb->data = data;
- skb->tail = data;
- skb->end = data + size;
-
- atomic_set(&(skb_shinfo(skb)->dataref), 1);
- skb_shinfo(skb)->nr_frags = 0;
- skb_shinfo(skb)->gso_size = 0;
- skb_shinfo(skb)->gso_segs = 0;
- skb_shinfo(skb)->gso_type = 0;
- skb_shinfo(skb)->frag_list = NULL;
-out:
- return skb;
-nodata:
- kmem_cache_free(skbuff_head_cache, skb);
- skb = NULL;
- goto out;
-}
-
/**
* __netdev_alloc_skb - allocate an skbuff for rx on a specific device
* @dev: network device to receive on
memcpy(n->cb, skb->cb, sizeof(skb->cb));
C(len);
C(data_len);
+ C(mac_len);
C(csum);
C(local_df);
n->cloned = 1;
n->tc_verd = SET_TC_VERD(skb->tc_verd,0);
n->tc_verd = CLR_TC_OK2MUNGE(n->tc_verd);
n->tc_verd = CLR_TC_MUNGED(n->tc_verd);
- C(input_dev);
+ C(iif);
#endif
skb_copy_secmark(n, skb);
#endif
*
* (We also register the sk_lock with the lock validator.)
*/
-static void inline sock_lock_init(struct sock *sk)
+static inline void sock_lock_init(struct sock *sk)
{
sock_lock_init_class_and_name(sk,
af_family_slock_key_strings[sk->sk_family],
* This file implement the Wireless Extensions APIs.
*
* Authors : Jean Tourrilhes - HPL - <jt@hpl.hp.com>
- * Copyright (c) 1997-2006 Jean Tourrilhes, All Rights Reserved.
+ * Copyright (c) 1997-2007 Jean Tourrilhes, All Rights Reserved.
*
* (As all part of the Linux kernel, this file is GPL)
*/
* o Change length in ESSID and NICK to strlen() instead of strlen()+1
* o Make standard_ioctl_num and standard_event_num unsigned
* o Remove (struct net_device *)->get_wireless_stats()
+ *
+ * v10 - 16.3.07 - Jean II
+ * o Prevent leaking of kernel space in stream on 64 bits.
*/
/***************************** INCLUDES *****************************/
IW_EV_QUAL_LEN, /* IW_HEADER_TYPE_QUAL */
};
+/* Size (in bytes) of various events, as packed */
+static const int event_type_pk_size[] = {
+ IW_EV_LCP_PK_LEN, /* IW_HEADER_TYPE_NULL */
+ 0,
+ IW_EV_CHAR_PK_LEN, /* IW_HEADER_TYPE_CHAR */
+ 0,
+ IW_EV_UINT_PK_LEN, /* IW_HEADER_TYPE_UINT */
+ IW_EV_FREQ_PK_LEN, /* IW_HEADER_TYPE_FREQ */
+ IW_EV_ADDR_PK_LEN, /* IW_HEADER_TYPE_ADDR */
+ 0,
+ IW_EV_POINT_PK_LEN, /* Without variable payload */
+ IW_EV_PARAM_PK_LEN, /* IW_HEADER_TYPE_PARAM */
+ IW_EV_QUAL_PK_LEN, /* IW_HEADER_TYPE_QUAL */
+};
+
/************************ COMMON SUBROUTINES ************************/
/*
* Stuff that may be used in various place or doesn't fit in one
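A short worked example of why the packed sizes matter (editor's note, derived from the constants above): struct iw_event starts with a __u16 length and a __u16 cmd followed by a union that contains user-space pointers, so on a 64-bit kernel the in-memory header size (IW_EV_LCP_LEN) is padded up to 8 bytes, while only 4 bytes (length + cmd, IW_EV_LCP_PK_LEN) belong in the event stream. Building the stream with the in-memory *_LEN values therefore copied the 4 padding bytes of uninitialized kernel memory to user space on 64-bit; the *_PK_LEN table is used so the stream is packed and parsed with the on-the-wire layout instead.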
memcpy(buffer + IW_EV_POINT_OFF, request, request_len);
/* Use our own copy of wrqu */
wrqu = (union iwreq_data *) (buffer + IW_EV_POINT_OFF
- + IW_EV_LCP_LEN);
+ + IW_EV_LCP_PK_LEN);
/* No extra arguments. Trivial to handle */
ret = handler(dev, &info, wrqu, NULL);
/* Get a temp copy of wrqu (skip pointer) */
memcpy(((char *) &wrqu_point) + IW_EV_POINT_OFF,
- ((char *) request) + IW_EV_LCP_LEN,
- IW_EV_POINT_LEN - IW_EV_LCP_LEN);
+ ((char *) request) + IW_EV_LCP_PK_LEN,
+ IW_EV_POINT_LEN - IW_EV_LCP_PK_LEN);
/* Calculate space needed by arguments. Always allocate
* for max space. Easier, and won't last long... */
(wrqu_point.data.length > descr->max_tokens))
extra_size = (wrqu_point.data.length
* descr->token_size);
- buffer_size = extra_size + IW_EV_POINT_LEN + IW_EV_POINT_OFF;
+ buffer_size = extra_size + IW_EV_POINT_PK_LEN + IW_EV_POINT_OFF;
#ifdef WE_RTNETLINK_DEBUG
printk(KERN_DEBUG "%s (WE.r) : Malloc %d bytes (%d bytes)\n",
dev->name, extra_size, buffer_size);
/* Put wrqu in the right place (just before extra).
* Leave space for IWE header and dummy pointer...
- * Note that IW_EV_LCP_LEN==4 bytes, so it's still aligned...
+ * Note that IW_EV_LCP_PK_LEN==4 bytes, so it's still aligned.
*/
- memcpy(buffer + IW_EV_LCP_LEN + IW_EV_POINT_OFF,
+ memcpy(buffer + IW_EV_LCP_PK_LEN + IW_EV_POINT_OFF,
((char *) &wrqu_point) + IW_EV_POINT_OFF,
- IW_EV_POINT_LEN - IW_EV_LCP_LEN);
- wrqu = (union iwreq_data *) (buffer + IW_EV_LCP_LEN);
+ IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN);
+ wrqu = (union iwreq_data *) (buffer + IW_EV_LCP_PK_LEN);
/* Extra comes logically after that. Offset +12 bytes. */
- extra = buffer + IW_EV_POINT_OFF + IW_EV_POINT_LEN;
+ extra = buffer + IW_EV_POINT_OFF + IW_EV_POINT_PK_LEN;
/* Call the handler */
ret = handler(dev, &info, wrqu, extra);
/* Calculate real returned length */
extra_size = (wrqu->data.length * descr->token_size);
/* Re-adjust reply size */
- request->len = extra_size + IW_EV_POINT_LEN;
+ request->len = extra_size + IW_EV_POINT_PK_LEN;
/* Put the iwe header where it should, i.e. scrap the
* dummy pointer. */
- memcpy(buffer + IW_EV_POINT_OFF, request, IW_EV_LCP_LEN);
+ memcpy(buffer + IW_EV_POINT_OFF, request, IW_EV_LCP_PK_LEN);
#ifdef WE_RTNETLINK_DEBUG
printk(KERN_DEBUG "%s (WE.r) : Reply 0x%04X, hdr_len %d, tokens %d, extra_size %d, buffer_size %d\n", dev->name, cmd, hdr_len, wrqu->data.length, extra_size, buffer_size);
#endif /* WE_RTNETLINK_DEBUG */
/* Extract fixed header from request. This is properly aligned. */
- wrqu = &request->u;
+ wrqu = (union iwreq_data *) (((char *) request) + IW_EV_LCP_PK_LEN);
/* Check if wrqu is complete */
- hdr_len = event_type_size[descr->header_type];
+ hdr_len = event_type_pk_size[descr->header_type];
if(request_len < hdr_len) {
#ifdef WE_RTNETLINK_DEBUG
printk(KERN_DEBUG
/* Put wrqu in the right place (skip pointer) */
memcpy(((char *) &wrqu_point) + IW_EV_POINT_OFF,
- wrqu, IW_EV_POINT_LEN - IW_EV_LCP_LEN);
+ wrqu, IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN);
/* Don't forget about the event code... */
wrqu = &wrqu_point;
hdr_len = extra_size;
extra_size = 0;
} else {
- hdr_len = IW_EV_POINT_LEN;
+ hdr_len = IW_EV_POINT_PK_LEN;
}
/* Check if wrqu is complete */
memcpy(buffer + IW_EV_POINT_OFF, request, request_len);
/* Use our own copy of wrqu */
wrqu = (union iwreq_data *) (buffer + IW_EV_POINT_OFF
- + IW_EV_LCP_LEN);
+ + IW_EV_LCP_PK_LEN);
/* No extra arguments. Trivial to handle */
ret = handler(dev, &info, wrqu, (char *) wrqu);
char * extra;
/* Buffer for full reply */
- buffer_size = extra_size + IW_EV_POINT_LEN + IW_EV_POINT_OFF;
+ buffer_size = extra_size + IW_EV_POINT_PK_LEN + IW_EV_POINT_OFF;
#ifdef WE_RTNETLINK_DEBUG
printk(KERN_DEBUG "%s (WE.r) : Malloc %d bytes (%d bytes)\n",
/* Put wrqu in the right place (just before extra).
* Leave space for IWE header and dummy pointer...
- * Note that IW_EV_LCP_LEN==4 bytes, so it's still aligned...
+ * Note that IW_EV_LCP_PK_LEN==4 bytes, so it's still aligned.
*/
- memcpy(buffer + IW_EV_LCP_LEN + IW_EV_POINT_OFF,
- ((char *) request) + IW_EV_LCP_LEN,
- IW_EV_POINT_LEN - IW_EV_LCP_LEN);
- wrqu = (union iwreq_data *) (buffer + IW_EV_LCP_LEN);
+ memcpy(buffer + IW_EV_LCP_PK_LEN + IW_EV_POINT_OFF,
+ ((char *) request) + IW_EV_LCP_PK_LEN,
+ IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN);
+ wrqu = (union iwreq_data *) (buffer + IW_EV_LCP_PK_LEN);
/* Extra comes logically after that. Offset +12 bytes. */
- extra = buffer + IW_EV_POINT_OFF + IW_EV_POINT_LEN;
+ extra = buffer + IW_EV_POINT_OFF + IW_EV_POINT_PK_LEN;
/* Call the handler */
ret = handler(dev, &info, wrqu, extra);
if (!(descr->get_args & IW_PRIV_SIZE_FIXED))
extra_size = adjust_priv_size(descr->get_args, wrqu);
/* Re-adjust reply size */
- request->len = extra_size + IW_EV_POINT_LEN;
+ request->len = extra_size + IW_EV_POINT_PK_LEN;
/* Put the iwe header where it should, i.e. scrap the
* dummy pointer. */
- memcpy(buffer + IW_EV_POINT_OFF, request, IW_EV_LCP_LEN);
+ memcpy(buffer + IW_EV_POINT_OFF, request, IW_EV_LCP_PK_LEN);
#ifdef WE_RTNETLINK_DEBUG
printk(KERN_DEBUG "%s (WE.r) : Reply 0x%04X, hdr_len %d, tokens %d, extra_size %d, buffer_size %d\n", dev->name, cmd, hdr_len, wrqu->data.length, extra_size, buffer_size);
/* Does it fits in wrqu ? */
if((descr->set_args & IW_PRIV_SIZE_FIXED) &&
(extra_size <= IFNAMSIZ)) {
- hdr_len = IW_EV_LCP_LEN + extra_size;
+ hdr_len = IW_EV_LCP_PK_LEN + extra_size;
extra_size = 0;
} else {
- hdr_len = IW_EV_POINT_LEN;
+ hdr_len = IW_EV_POINT_PK_LEN;
}
/* Extract fixed header from request. This is properly aligned. */
- wrqu = &request->u;
+ wrqu = (union iwreq_data *) (((char *) request) + IW_EV_LCP_PK_LEN);
/* Check if wrqu is complete */
if(request_len < hdr_len) {
/* Put wrqu in the right place (skip pointer) */
memcpy(((char *) &wrqu_point) + IW_EV_POINT_OFF,
- wrqu, IW_EV_POINT_LEN - IW_EV_LCP_LEN);
+ wrqu, IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN);
/* Does it fits within bounds ? */
if(wrqu_point.data.length > (descr->set_args &
iw_handler handler;
/* Check length */
- if(len < IW_EV_LCP_LEN) {
+ if(len < IW_EV_LCP_PK_LEN) {
printk(KERN_DEBUG "%s (WE.r) : RtNetlink request too short (%d)\n",
dev->name, len);
return -EINVAL;
iw_handler handler;
/* Check length */
- if(len < IW_EV_LCP_LEN) {
+ if(len < IW_EV_LCP_PK_LEN) {
printk(KERN_DEBUG "%s (WE.r) : RtNetlink request too short (%d)\n",
dev->name, len);
return -EINVAL;
const enum dccp_pkt_type pkt_type);
extern void dccp_write_xmit(struct sock *sk, int block);
-extern void dccp_write_xmit_timer(unsigned long data);
extern void dccp_write_space(struct sock *sk);
extern void dccp_init_xmit_timers(struct sock *sk);
if (get_user(len, optlen))
return -EFAULT;
- if (len < sizeof(int))
+ if (len < (int)sizeof(int))
return -EINVAL;
dp = dccp_sk(sk);
(__be32 __user *)optval, optlen);
case DCCP_SOCKOPT_SEND_CSCOV:
val = dp->dccps_pcslen;
+ len = sizeof(val);
break;
case DCCP_SOCKOPT_RECV_CSCOV:
val = dp->dccps_pcrlen;
+ len = sizeof(val);
break;
case 128 ... 191:
return ccid_hc_rx_getsockopt(dp->dccps_hc_rx_ccid, sk, optname,
}
/* Transmit-delay timer: used by the CCIDs to delay actual send time */
-void dccp_write_xmit_timer(unsigned long data)
+static void dccp_write_xmit_timer(unsigned long data)
{
struct sock *sk = (struct sock *)data;
struct dccp_sock *dp = dccp_sk(sk);
{
int error;
u8 scope;
-} dn_fib_props[RTA_MAX+1] = {
+} dn_fib_props[RTN_MAX+1] = {
[RTN_UNSPEC] = { .error = 0, .scope = RT_SCOPE_NOWHERE },
[RTN_UNICAST] = { .error = 0, .scope = RT_SCOPE_UNIVERSE },
[RTN_LOCAL] = { .error = 0, .scope = RT_SCOPE_HOST },
struct dn_fib_info *ofi;
int nhs = 1;
+ if (r->rtm_type > RTN_MAX)
+ goto err_inval;
+
if (dn_fib_props[r->rtm_type].scope > r->rtm_scope)
goto err_inval;
static struct nla_policy dn_fib_rule_policy[FRA_MAX+1] __read_mostly = {
FRA_GENERIC_POLICY,
- [FRA_SRC] = { .type = NLA_U16 },
- [FRA_DST] = { .type = NLA_U16 },
};
static int dn_fib_rule_match(struct fib_rule *rule, struct flowi *fl, int flags)
int err = -EINVAL;
struct dn_fib_rule *r = (struct dn_fib_rule *)rule;
- if (frh->src_len > 16 || frh->dst_len > 16 || frh->tos)
+ if (frh->tos)
goto errout;
if (rule->table == RT_TABLE_UNSPEC) {
}
}
- if (tb[FRA_SRC])
+ if (frh->src_len)
r->src = nla_get_le16(tb[FRA_SRC]);
- if (tb[FRA_DST])
+ if (frh->dst_len)
r->dst = nla_get_le16(tb[FRA_DST]);
r->src_len = frh->src_len;
if (frh->dst_len && (r->dst_len != frh->dst_len))
return 0;
- if (tb[FRA_SRC] && (r->src != nla_get_le16(tb[FRA_SRC])))
+ if (frh->src_len && (r->src != nla_get_le16(tb[FRA_SRC])))
return 0;
- if (tb[FRA_DST] && (r->dst != nla_get_le16(tb[FRA_DST])))
+ if (frh->dst_len && (r->dst != nla_get_le16(tb[FRA_DST])))
return 0;
return 1;
static struct fib_rules_ops dn_fib_rules_ops = {
.family = AF_DECnet,
.rule_size = sizeof(struct dn_fib_rule),
+ .addr_size = sizeof(u16),
.action = dn_fib_rule_action,
.match = dn_fib_rule_match,
.configure = dn_fib_rule_configure,
Include software based cipher suites in support of IEEE
802.11's WEP. This is needed for WEP as well as 802.1x.
- This can be compiled as a modules and it will be called
+ This can be compiled as a module and it will be called
"ieee80211_crypt_wep".
config IEEE80211_CRYPT_CCMP
(aka TGi, WPA, WPA2, WPA-PSK, etc.) for use with CCMP enabled
networks.
- This can be compiled as a modules and it will be called
+ This can be compiled as a module and it will be called
"ieee80211_crypt_ccmp".
config IEEE80211_CRYPT_TKIP
(aka TGi, WPA, WPA2, WPA-PSK, etc.) for use with TKIP enabled
networks.
- This can be compiled as a modules and it will be called
+ This can be compiled as a module and it will be called
"ieee80211_crypt_tkip".
source "net/ieee80211/softmac/Kconfig"
&cipso_ptr[6],
secattr);
break;
+ case CIPSO_V4_TAG_RANGE:
+ ret_val = cipso_v4_parsetag_rng(doi_def,
+ &cipso_ptr[6],
+ secattr);
+ break;
}
skbuff_getattr_return:
cfg->fc_nlinfo.pid = NETLINK_CB(skb).pid;
cfg->fc_nlinfo.nlh = nlh;
+ if (cfg->fc_type > RTN_MAX) {
+ err = -EINVAL;
+ goto errout;
+ }
+
nlmsg_for_each_attr(attr, nlh, sizeof(struct rtmsg), remaining) {
switch (attr->nla_type) {
case RTA_DST:
.nl_u = { .ip4_u = { .daddr = frn->fl_addr,
.tos = frn->fl_tos,
.scope = frn->fl_scope } } };
+
+ frn->err = -ENOENT;
if (tb) {
local_bh_disable();
frn->nh_sel = res.nh_sel;
frn->type = res.type;
frn->scope = res.scope;
+ fib_res_put(&res);
}
local_bh_enable();
}
struct fib_table *tb;
skb = skb_dequeue(&sk->sk_receive_queue);
+ if (skb == NULL)
+ return;
+
nlh = (struct nlmsghdr *)skb->data;
if (skb->len < NLMSG_SPACE(0) || skb->len < nlh->nlmsg_len ||
nlh->nlmsg_len < NLMSG_LENGTH(sizeof(*frn))) {
nl_fib_lookup(frn, tb);
- pid = nlh->nlmsg_pid; /*pid of sending process */
+ pid = NETLINK_CB(skb).pid; /* pid of sending process */
NETLINK_CB(skb).pid = 0; /* from kernel */
NETLINK_CB(skb).dst_group = 0; /* unicast */
netlink_unicast(sk, skb, pid, MSG_DONTWAIT);
static struct nla_policy fib4_rule_policy[FRA_MAX+1] __read_mostly = {
FRA_GENERIC_POLICY,
- [FRA_SRC] = { .type = NLA_U32 },
- [FRA_DST] = { .type = NLA_U32 },
[FRA_FLOW] = { .type = NLA_U32 },
};
int err = -EINVAL;
struct fib4_rule *rule4 = (struct fib4_rule *) rule;
- if (frh->src_len > 32 || frh->dst_len > 32 ||
- (frh->tos & ~IPTOS_TOS_MASK))
+ if (frh->tos & ~IPTOS_TOS_MASK)
goto errout;
if (rule->table == RT_TABLE_UNSPEC) {
}
}
- if (tb[FRA_SRC])
+ if (frh->src_len)
rule4->src = nla_get_be32(tb[FRA_SRC]);
- if (tb[FRA_DST])
+ if (frh->dst_len)
rule4->dst = nla_get_be32(tb[FRA_DST]);
#ifdef CONFIG_NET_CLS_ROUTE
return 0;
#endif
- if (tb[FRA_SRC] && (rule4->src != nla_get_be32(tb[FRA_SRC])))
+ if (frh->src_len && (rule4->src != nla_get_be32(tb[FRA_SRC])))
return 0;
- if (tb[FRA_DST] && (rule4->dst != nla_get_be32(tb[FRA_DST])))
+ if (frh->dst_len && (rule4->dst != nla_get_be32(tb[FRA_DST])))
return 0;
return 1;
static struct fib_rules_ops fib4_rules_ops = {
.family = AF_INET,
.rule_size = sizeof(struct fib4_rule),
+ .addr_size = sizeof(u32),
.action = fib4_rule_action,
.match = fib4_rule_match,
.configure = fib4_rule_configure,
{
int error;
u8 scope;
-} fib_props[RTA_MAX + 1] = {
+} fib_props[RTN_MAX + 1] = {
{
.error = 0,
.scope = RT_SCOPE_NOWHERE,
return fa_head;
}
+/*
+ * Caller must hold RTNL.
+ */
static int fn_trie_insert(struct fib_table *tb, struct fib_config *cfg)
{
struct trie *t = (struct trie *) tb->tb_data;
t->revision++;
t->size--;
- preempt_disable();
tp = NODE_PARENT(n);
tnode_free((struct tnode *) n);
rcu_assign_pointer(t->trie, trie_rebalance(t, tp));
} else
rcu_assign_pointer(t->trie, NULL);
- preempt_enable();
return 1;
}
+/*
+ * Caller must hold RTNL.
+ */
static int fn_trie_delete(struct fib_table *tb, struct fib_config *cfg)
{
struct trie *t = (struct trie *) tb->tb_data;
return NULL; /* Ready. Root of trie */
}
+/*
+ * Caller must hold RTNL.
+ */
static int fn_trie_flush(struct fib_table *tb)
{
struct trie *t = (struct trie *) tb->tb_data;
*/
void ip_mc_rejoin_group(struct ip_mc_list *im)
{
+#ifdef CONFIG_IP_MULTICAST
struct in_device *in_dev = im->interface;
-#ifdef CONFIG_IP_MULTICAST
if (im->multiaddr == IGMP_ALL_HOSTS)
return;
return 0;
}
- for (i = 0, ret = 0; i < IFNAMSIZ/sizeof(unsigned long); i++) {
- unsigned long odev;
- memcpy(&odev, outdev + i*sizeof(unsigned long),
- sizeof(unsigned long));
- ret |= (odev
- ^ ((const unsigned long *)arpinfo->outiface)[i])
- & ((const unsigned long *)arpinfo->outiface_mask)[i];
+ for (i = 0, ret = 0; i < IFNAMSIZ; i++) {
+ ret |= (outdev[i] ^ arpinfo->outiface[i])
+ & arpinfo->outiface_mask[i];
}
if (FWINV(ret != 0, ARPT_INV_VIA_OUT)) {
enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo);
if (ct->tuplehash[dir].tuple.dst.ip !=
- ct->tuplehash[!dir].tuple.src.ip
-#ifdef CONFIG_XFRM
- || ct->tuplehash[dir].tuple.dst.u.all !=
- ct->tuplehash[!dir].tuple.src.u.all
-#endif
- )
+ ct->tuplehash[!dir].tuple.src.ip) {
if (ip_route_me_harder(pskb, RTN_UNSPEC))
ret = NF_DROP;
+ }
+#ifdef CONFIG_XFRM
+ else if (ct->tuplehash[dir].tuple.dst.u.all !=
+ ct->tuplehash[!dir].tuple.src.u.all)
+ if (ip_xfrm_me_harder(pskb))
+ ret = NF_DROP;
+#endif
+
}
return ret;
}
"has invalid config pointer!\n");
return 0;
}
- clusterip_config_entry_get(cipinfo->config);
} else {
/* Case B: This is a new rule referring to an existing
* clusterip config. */
cipinfo->config = config;
- clusterip_config_entry_get(cipinfo->config);
}
} else {
/* Case C: This is a completely new clusterip config */
#include <linux/netfilter_ipv4/ipt_ULOG.h>
#include <net/sock.h>
#include <linux/bitops.h>
+#include <asm/unaligned.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>");
/* copy hook, prefix, timestamp, payload, etc. */
pm->data_len = copy_len;
- pm->timestamp_sec = skb->tstamp.off_sec;
- pm->timestamp_usec = skb->tstamp.off_usec;
- pm->mark = skb->mark;
+ put_unaligned(skb->tstamp.off_sec, &pm->timestamp_sec);
+ put_unaligned(skb->tstamp.off_usec, &pm->timestamp_usec);
+ put_unaligned(skb->mark, &pm->mark);
pm->hook = hooknum;
if (prefix != NULL)
strncpy(pm->prefix, prefix, sizeof(pm->prefix));
enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo);
if (ct->tuplehash[dir].tuple.dst.u3.ip !=
- ct->tuplehash[!dir].tuple.src.u3.ip
-#ifdef CONFIG_XFRM
- || ct->tuplehash[dir].tuple.dst.u.all !=
- ct->tuplehash[!dir].tuple.src.u.all
-#endif
- )
+ ct->tuplehash[!dir].tuple.src.u3.ip) {
if (ip_route_me_harder(pskb, RTN_UNSPEC))
ret = NF_DROP;
+ }
+#ifdef CONFIG_XFRM
+ else if (ct->tuplehash[dir].tuple.dst.u.all !=
+ ct->tuplehash[!dir].tuple.src.u.all)
+ if (ip_xfrm_me_harder(pskb))
+ ret = NF_DROP;
+#endif
}
return ret;
}
sysctl_max_syn_backlog = 128;
}
- /* Allow no more than 3/4 kernel memory (usually less) allocated to TCP */
- sysctl_tcp_mem[0] = (1536 / sizeof (struct inet_bind_hashbucket)) << order;
- sysctl_tcp_mem[1] = sysctl_tcp_mem[0] * 4 / 3;
+ /* Set the pressure threshold to be a fraction of global memory that
+ * is up to 1/2 at 256 MB, decreasing toward zero as the amount of
+ * memory shrinks below that, with a floor of 128 pages.
+ */
+ limit = min(nr_all_pages, 1UL<<(28-PAGE_SHIFT)) >> (20-PAGE_SHIFT);
+ limit = (limit * (nr_all_pages >> (20-PAGE_SHIFT))) >> (PAGE_SHIFT-11);
+ limit = max(limit, 128UL);
+ sysctl_tcp_mem[0] = limit / 4 * 3;
+ sysctl_tcp_mem[1] = limit;
sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2;
+ /* Set per-socket limits to no more than 1/128 the pressure threshold */
limit = ((unsigned long)sysctl_tcp_mem[1]) << (PAGE_SHIFT - 7);
max_share = min(4UL*1024*1024, limit);
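To make the new sizing concrete (an editor's worked example, assuming 4 KB pages): on a 256 MB machine nr_all_pages is 65536, the first line yields min(65536, 65536) >> 8 = 256, and the second gives (256 * 256) >> 1 = 32768 pages, i.e. 128 MB or half of RAM, so tcp_mem becomes {24576, 32768, 49152} pages. The per-socket limit then works out to 32768 << 5 = 1 MB, and max_share is min(4 MB, 1 MB) = 1 MB. Below 256 MB the fraction shrinks linearly (a 128 MB machine gets 8192 pages, a quarter of RAM), with the 128-page floor catching very small systems.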
struct inet_connection_sock *icsk = inet_csk(sk);
struct tcp_congestion_ops *ca;
- if (icsk->icsk_ca_ops != &tcp_init_congestion_ops)
- return;
+ /* if no choice has been made yet, assign the first available algorithm as the default */
+ if (icsk->icsk_ca_ops == &tcp_init_congestion_ops) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(ca, &tcp_cong_list, list) {
+ if (try_module_get(ca->owner)) {
+ icsk->icsk_ca_ops = ca;
+ break;
+ }
- rcu_read_lock();
- list_for_each_entry_rcu(ca, &tcp_cong_list, list) {
- if (try_module_get(ca->owner)) {
- icsk->icsk_ca_ops = ca;
- break;
+ /* fallback to next available */
}
-
+ rcu_read_unlock();
}
- rcu_read_unlock();
if (icsk->icsk_ca_ops->init)
icsk->icsk_ca_ops->init(sk);
rcu_read_lock();
ca = tcp_ca_find(name);
+
/* no change asking for existing value */
if (ca == icsk->icsk_ca_ops)
goto out;
else {
tcp_cleanup_congestion_control(sk);
icsk->icsk_ca_ops = ca;
- if (icsk->icsk_ca_ops->init)
+
+ if (sk->sk_state != TCP_CLOSE && icsk->icsk_ca_ops->init)
icsk->icsk_ca_ops->init(sk);
}
out:
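As a usage note (an illustrative userspace sketch, not part of the patch): this path is what the TCP_CONGESTION socket option ends up in, so an application selects an algorithm roughly like this, where the algorithm name passed in is just an example:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/socket.h>

    /* pick a congestion control algorithm by name; returns 0 on success */
    static int pick_cc(int sk, const char *name)
    {
            return setsockopt(sk, IPPROTO_TCP, TCP_CONGESTION,
                              name, strlen(name));
    }

With the hunk above, doing this on a socket that is still in TCP_CLOSE no longer runs the algorithm's init hook immediately; it is run when the connection is actually set up.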
if (tp->packets_out > tp->snd_cwnd_used)
tp->snd_cwnd_used = tp->packets_out;
- if ((s32)(tcp_time_stamp - tp->snd_cwnd_stamp) >= inet_csk(sk)->icsk_rto)
+ if (sysctl_tcp_slow_start_after_idle &&
+ (s32)(tcp_time_stamp - tp->snd_cwnd_stamp) >= inet_csk(sk)->icsk_rto)
tcp_cwnd_application_limited(sk);
}
}
*/
if (window <= free_space - mss || window > free_space)
window = (free_space/mss)*mss;
+ else if (mss == full_space &&
+ free_space > window + full_space/2)
+ window = free_space;
}
return window;
skb->nh.raw = skb_push(skb, x->props.header_len + hdrlen);
top_iph = skb->nh.iph;
- hdrlen = iph->ihl * 4 - optlen;
- skb->h.raw += hdrlen;
+ skb->h.raw += sizeof(*iph) - hdrlen;
- memmove(top_iph, iph, hdrlen);
+ memmove(top_iph, iph, sizeof(*iph));
if (unlikely(optlen)) {
struct ip_beet_phdr *ph;
ph = (struct ip_beet_phdr *)skb->h.raw;
ph->padlen = 4 - (optlen & 4);
- ph->hdrlen = (optlen + ph->padlen + sizeof(*ph)) / 8;
+ ph->hdrlen = optlen / 8;
ph->nexthdr = top_iph->protocol;
+ if (ph->padlen)
+ memset(ph + 1, IPOPT_NOP, ph->padlen);
top_iph->protocol = IPPROTO_BEETPH;
top_iph->ihl = sizeof(struct iphdr) / 4;
protocol = iph->protocol;
if (unlikely(iph->protocol == IPPROTO_BEETPH)) {
- struct ip_beet_phdr *ph = (struct ip_beet_phdr*)(iph + 1);
+ struct ip_beet_phdr *ph;
if (!pskb_may_pull(skb, sizeof(*ph)))
goto out;
+ ph = (struct ip_beet_phdr *)(skb->h.ipiph + 1);
- phlen = ph->hdrlen * 8;
- optlen = phlen - ph->padlen - sizeof(*ph);
+ phlen = sizeof(*ph) + ph->padlen;
+ optlen = ph->hdrlen * 8 + (IPV4_BEET_PHMAXLEN - phlen);
if (optlen < 0 || optlen & 3 || optlen > 250)
goto out;
- if (!pskb_may_pull(skb, phlen))
+ if (!pskb_may_pull(skb, phlen + optlen))
goto out;
+ skb->len -= phlen + optlen;
ph_nexthdr = ph->nexthdr;
}
- skb_push(skb, sizeof(*iph) - phlen + optlen);
- memmove(skb->data, skb->nh.raw, sizeof(*iph));
- skb->nh.raw = skb->data;
+ skb->nh.raw = skb->data + (phlen - sizeof(*iph));
+ memmove(skb->nh.raw, iph, sizeof(*iph));
+ skb->h.raw = skb->data + (phlen + optlen);
+ skb->data = skb->h.raw;
iph = skb->nh.iph;
iph->ihl = (sizeof(*iph) + optlen) / 4;
- iph->tot_len = htons(skb->len);
+ iph->tot_len = htons(skb->len + iph->ihl * 4);
iph->daddr = x->sel.daddr.a4;
iph->saddr = x->sel.saddr.a4;
if (ph_nexthdr)
#endif
#endif
.proxy_ndp = 0,
+ .accept_source_route = 0, /* we do not accept RH0 by default. */
};
static struct ipv6_devconf ipv6_devconf_dflt __read_mostly = {
#endif
#endif
.proxy_ndp = 0,
+ .accept_source_route = 0, /* we do not accept RH0 by default. */
};
/* IPv6 Wildcard Address and Loopback Address defined by RFC2553 */
}
#endif
+ if (netif_running(dev) && netif_carrier_ok(dev))
+ ndev->if_flags |= IF_READY;
+
ipv6_mc_init_dev(ndev);
ndev->tstamp = jiffies;
#ifdef CONFIG_SYSCTL
#define IPV6_SADDR_SCORE_LABEL 0x0020
#define IPV6_SADDR_SCORE_PRIVACY 0x0040
-static int inline ipv6_saddr_preferred(int type)
+static inline int ipv6_saddr_preferred(int type)
{
if (type & (IPV6_ADDR_MAPPED|IPV6_ADDR_COMPATv4|
IPV6_ADDR_LOOPBACK|IPV6_ADDR_RESERVED))
}
/* static matching label */
-static int inline ipv6_saddr_label(const struct in6_addr *addr, int type)
+static inline int ipv6_saddr_label(const struct in6_addr *addr, int type)
{
/*
* prefix (longest match) label
rtnl_set_sk_err(RTNLGRP_IPV6_IFADDR, err);
}
-static void inline ipv6_store_devconf(struct ipv6_devconf *cnf,
+static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
__s32 *array, int bytes)
{
BUG_ON(bytes < (DEVCONF_MAX * 4));
#endif
#endif
array[DEVCONF_PROXY_NDP] = cnf->proxy_ndp;
+ array[DEVCONF_ACCEPT_SOURCE_ROUTE] = cnf->accept_source_route;
}
static inline size_t inet6_if_nlmsg_size(void)
.mode = 0644,
.proc_handler = &proc_dointvec,
},
+ {
+ .ctl_name = NET_IPV6_ACCEPT_SOURCE_ROUTE,
+ .procname = "accept_source_route",
+ .data = &ipv6_devconf.accept_source_route,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
{
.ctl_name = 0, /* sentinel */
}
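An administration example in the same spirit as the other sysctls (the interface name eth0 is only a placeholder): with the entry above, processing of type 0 routing headers stays off by default and is only re-enabled when both the global and the per-interface value are positive, while a negative value makes the stack drop any packet carrying a routing header:

# echo 1 > /proc/sys/net/ipv6/conf/all/accept_source_route
# echo 1 > /proc/sys/net/ipv6/conf/eth0/accept_source_route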
struct inet6_skb_parm *opt = IP6CB(skb);
struct in6_addr *addr = NULL;
struct in6_addr daddr;
+ struct inet6_dev *idev;
int n, i;
-
struct ipv6_rt_hdr *hdr;
struct rt0_hdr *rthdr;
+ int accept_source_route = ipv6_devconf.accept_source_route;
+
+ if (accept_source_route < 0 ||
+ ((idev = in6_dev_get(skb->dev)) == NULL)) {
+ kfree_skb(skb);
+ return -1;
+ }
+ if (idev->cnf.accept_source_route < 0) {
+ in6_dev_put(idev);
+ kfree_skb(skb);
+ return -1;
+ }
+
+ if (accept_source_route > idev->cnf.accept_source_route)
+ accept_source_route = idev->cnf.accept_source_route;
+
+ in6_dev_put(idev);
if (!pskb_may_pull(skb, (skb->h.raw-skb->data)+8) ||
!pskb_may_pull(skb, (skb->h.raw-skb->data)+((skb->h.raw[1]+1)<<3))) {
hdr = (struct ipv6_rt_hdr *) skb->h.raw;
+ switch (hdr->type) {
+#ifdef CONFIG_IPV6_MIP6
+ case IPV6_SRCRT_TYPE_2:
+ break;
+#endif
+ case IPV6_SRCRT_TYPE_0:
+ if (accept_source_route > 0)
+ break;
+ kfree_skb(skb);
+ return -1;
+ default:
+ IP6_INC_STATS_BH(ip6_dst_idev(skb->dst),
+ IPSTATS_MIB_INHDRERRORS);
+ icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, (&hdr->type) - skb->nh.raw);
+ return -1;
+ }
+
if (ipv6_addr_is_multicast(&skb->nh.ipv6h->daddr) ||
skb->pkt_type != PACKET_HOST) {
IP6_INC_STATS_BH(ip6_dst_idev(skb->dst),
}
break;
#endif
- default:
- IP6_INC_STATS_BH(ip6_dst_idev(skb->dst),
- IPSTATS_MIB_INHDRERRORS);
- icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, (&hdr->type) - skb->nh.raw);
- return -1;
}
/*
static struct nla_policy fib6_rule_policy[FRA_MAX+1] __read_mostly = {
FRA_GENERIC_POLICY,
- [FRA_SRC] = { .len = sizeof(struct in6_addr) },
- [FRA_DST] = { .len = sizeof(struct in6_addr) },
};
static int fib6_rule_configure(struct fib_rule *rule, struct sk_buff *skb,
int err = -EINVAL;
struct fib6_rule *rule6 = (struct fib6_rule *) rule;
- if (frh->src_len > 128 || frh->dst_len > 128)
- goto errout;
-
if (rule->action == FR_ACT_TO_TBL) {
if (rule->table == RT6_TABLE_UNSPEC)
goto errout;
}
}
- if (tb[FRA_SRC])
+ if (frh->src_len)
nla_memcpy(&rule6->src.addr, tb[FRA_SRC],
sizeof(struct in6_addr));
- if (tb[FRA_DST])
+ if (frh->dst_len)
nla_memcpy(&rule6->dst.addr, tb[FRA_DST],
sizeof(struct in6_addr));
if (frh->tos && (rule6->tclass != frh->tos))
return 0;
- if (tb[FRA_SRC] &&
+ if (frh->src_len &&
nla_memcmp(tb[FRA_SRC], &rule6->src.addr, sizeof(struct in6_addr)))
return 0;
- if (tb[FRA_DST] &&
+ if (frh->dst_len &&
nla_memcmp(tb[FRA_DST], &rule6->dst.addr, sizeof(struct in6_addr)))
return 0;
static struct fib_rules_ops fib6_rules_ops = {
.family = AF_INET6,
.rule_size = sizeof(struct fib6_rule),
+ .addr_size = sizeof(struct in6_addr),
.action = fib6_rule_action,
.match = fib6_rule_match,
.configure = fib6_rule_configure,
ins = &iter->u.dst.rt6_next;
}
+ /* Reset round-robin state, if necessary */
+ if (ins == &fn->leaf)
+ fn->rr_ptr = NULL;
+
/*
* insert node
*/
rt6_stats.fib_rt_entries--;
rt6_stats.fib_discarded_routes++;
+ /* Reset round-robin state, if necessary */
+ if (fn->rr_ptr == rt)
+ fn->rr_ptr = NULL;
+
/* Adjust walkers */
read_lock(&fib6_walker_lock);
FOR_WALKERS(w) {
/* pkt_len may be zero if Jumbo payload option is present */
if (pkt_len || hdr->nexthdr != NEXTHDR_HOP) {
- if (pkt_len + sizeof(struct ipv6hdr) > skb->len)
- goto truncated;
+ if (pkt_len + sizeof(struct ipv6hdr) > skb->len) {
+ IP6_INC_STATS_BH(idev, IPSTATS_MIB_INTRUNCATEDPKTS);
+ goto drop;
+ }
if (pskb_trim_rcsum(skb, pkt_len + sizeof(struct ipv6hdr))) {
IP6_INC_STATS_BH(idev, IPSTATS_MIB_INHDRERRORS);
goto drop;
rcu_read_unlock();
return NF_HOOK(PF_INET6,NF_IP6_PRE_ROUTING, skb, dev, NULL, ip6_rcv_finish);
-truncated:
- IP6_INC_STATS_BH(idev, IPSTATS_MIB_INTRUNCATEDPKTS);
err:
IP6_INC_STATS_BH(idev, IPSTATS_MIB_INHDRERRORS);
drop:
int err;
/* Rough check on arithmetic overflow,
- better check is made in ip6_build_xmit
+ better check is made in ip6_append_data().
*/
- if (len < 0)
+ if (len > INT_MAX)
return -EMSGSIZE;
/* Mirror BSD error message compatibility */
/*
* Default Router Selection (RFC 2461 6.3.6)
*/
-static int inline rt6_check_dev(struct rt6_info *rt, int oif)
+static inline int rt6_check_dev(struct rt6_info *rt, int oif)
{
struct net_device *dev = rt->rt6i_dev;
- int ret = 0;
-
- if (!oif)
- return 2;
- if (dev->flags & IFF_LOOPBACK) {
- if (!WARN_ON(rt->rt6i_idev == NULL) &&
- rt->rt6i_idev->dev->ifindex == oif)
- ret = 1;
- else
- return 0;
- }
- if (dev->ifindex == oif)
+ if (!oif || dev->ifindex == oif)
return 2;
-
- return ret;
+ if ((dev->flags & IFF_LOOPBACK) &&
+ rt->rt6i_idev && rt->rt6i_idev->dev->ifindex == oif)
+ return 1;
+ return 0;
}
-static int inline rt6_check_neigh(struct rt6_info *rt)
+static inline int rt6_check_neigh(struct rt6_info *rt)
{
struct neighbour *neigh = rt->rt6i_nexthop;
int m = 0;
return m;
}
-static struct rt6_info *rt6_select(struct rt6_info **head, int oif,
- int strict)
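+/* Score one route against the current best match for the given output
+ * interface and lookup flags; returns the higher-scoring of the two and
+ * updates *mpri. Expired routes are skipped, and when a reachable next
+ * hop is required the losing candidate is probed. */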
+static struct rt6_info *find_match(struct rt6_info *rt, int oif, int strict,
+ int *mpri, struct rt6_info *match)
{
- struct rt6_info *match = NULL, *last = NULL;
- struct rt6_info *rt, *rt0 = *head;
- u32 metric;
+ int m;
+
+ if (rt6_check_expired(rt))
+ goto out;
+
+ m = rt6_score_route(rt, oif, strict);
+ if (m < 0)
+ goto out;
+
+ if (m > *mpri) {
+ if (strict & RT6_LOOKUP_F_REACHABLE)
+ rt6_probe(match);
+ *mpri = m;
+ match = rt;
+ } else if (strict & RT6_LOOKUP_F_REACHABLE) {
+ rt6_probe(rt);
+ }
+
+out:
+ return match;
+}
+
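+/* Walk the routes of the given metric in this fib6 node, starting at the
+ * round-robin head and wrapping around to the leaf, and return the one
+ * that find_match() scores highest. */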
+static struct rt6_info *find_rr_leaf(struct fib6_node *fn,
+ struct rt6_info *rr_head,
+ u32 metric, int oif, int strict)
+{
+ struct rt6_info *rt, *match;
int mpri = -1;
- RT6_TRACE("%s(head=%p(*head=%p), oif=%d)\n",
- __FUNCTION__, head, head ? *head : NULL, oif);
+ match = NULL;
+ for (rt = rr_head; rt && rt->rt6i_metric == metric;
+ rt = rt->u.dst.rt6_next)
+ match = find_match(rt, oif, strict, &mpri, match);
+ for (rt = fn->leaf; rt && rt != rr_head && rt->rt6i_metric == metric;
+ rt = rt->u.dst.rt6_next)
+ match = find_match(rt, oif, strict, &mpri, match);
- for (rt = rt0, metric = rt0->rt6i_metric;
- rt && rt->rt6i_metric == metric && (!last || rt != rt0);
- rt = rt->u.dst.rt6_next) {
- int m;
+ return match;
+}
- if (rt6_check_expired(rt))
- continue;
+static struct rt6_info *rt6_select(struct fib6_node *fn, int oif, int strict)
+{
+ struct rt6_info *match, *rt0;
- last = rt;
+ RT6_TRACE("%s(fn->leaf=%p, oif=%d)\n",
+ __FUNCTION__, fn->leaf, oif);
- m = rt6_score_route(rt, oif, strict);
- if (m < 0)
- continue;
+ rt0 = fn->rr_ptr;
+ if (!rt0)
+ fn->rr_ptr = rt0 = fn->leaf;
- if (m > mpri) {
- if (strict & RT6_LOOKUP_F_REACHABLE)
- rt6_probe(match);
- match = rt;
- mpri = m;
- } else if (strict & RT6_LOOKUP_F_REACHABLE) {
- rt6_probe(rt);
- }
- }
+ match = find_rr_leaf(fn, rt0, rt0->rt6i_metric, oif, strict);
if (!match &&
- (strict & RT6_LOOKUP_F_REACHABLE) &&
- last && last != rt0) {
+ (strict & RT6_LOOKUP_F_REACHABLE)) {
+ struct rt6_info *next = rt0->u.dst.rt6_next;
+
/* no entries matched; do round-robin */
- static DEFINE_SPINLOCK(lock);
- spin_lock(&lock);
- *head = rt0->u.dst.rt6_next;
- rt0->u.dst.rt6_next = last->u.dst.rt6_next;
- last->u.dst.rt6_next = rt0;
- spin_unlock(&lock);
+ if (!next || next->rt6i_metric != rt0->rt6i_metric)
+ next = fn->leaf;
+
+ if (next != rt0)
+ fn->rr_ptr = next;
}
- RT6_TRACE("%s() => %p, score=%d\n",
- __FUNCTION__, match, mpri);
+ RT6_TRACE("%s() => %p\n",
+ __FUNCTION__, match);
return (match ? match : &ip6_null_entry);
}
fn = fib6_lookup(&table->tb6_root, &fl->fl6_dst, &fl->fl6_src);
restart:
- rt = rt6_select(&fn->leaf, fl->iif, strict | reachable);
+ rt = rt6_select(fn, fl->iif, strict | reachable);
BACKTRACK(&fl->fl6_src);
if (rt == &ip6_null_entry ||
rt->rt6i_flags & RTF_CACHE)
fn = fib6_lookup(&table->tb6_root, &fl->fl6_dst, &fl->fl6_src);
restart:
- rt = rt6_select(&fn->leaf, fl->oif, strict | reachable);
+ rt = rt6_select(fn, fl->oif, strict | reachable);
BACKTRACK(&fl->fl6_src);
if (rt == &ip6_null_entry ||
rt->rt6i_flags & RTF_CACHE)
* Drop the packet on the floor
*/
-static inline int ip6_pkt_drop(struct sk_buff *skb, int code)
+static inline int ip6_pkt_drop(struct sk_buff *skb, int code,
+ int ipstats_mib_noroutes)
{
- int type = ipv6_addr_type(&skb->nh.ipv6h->daddr);
- if (type == IPV6_ADDR_ANY || type == IPV6_ADDR_RESERVED)
- IP6_INC_STATS(ip6_dst_idev(skb->dst), IPSTATS_MIB_INADDRERRORS);
-
- IP6_INC_STATS(ip6_dst_idev(skb->dst), IPSTATS_MIB_OUTNOROUTES);
+ int type;
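+ /* Account the drop against the counter matching the direction:
+ * unroutable input packets with an unusable destination address
+ * count as InAddrErrors, everything else against the no-route
+ * counter supplied by the caller. */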
+ switch (ipstats_mib_noroutes) {
+ case IPSTATS_MIB_INNOROUTES:
+ type = ipv6_addr_type(&skb->nh.ipv6h->daddr);
+ if (type == IPV6_ADDR_ANY || type == IPV6_ADDR_RESERVED) {
+ IP6_INC_STATS(ip6_dst_idev(skb->dst), IPSTATS_MIB_INADDRERRORS);
+ break;
+ }
+ /* FALLTHROUGH */
+ case IPSTATS_MIB_OUTNOROUTES:
+ IP6_INC_STATS(ip6_dst_idev(skb->dst), ipstats_mib_noroutes);
+ break;
+ }
icmpv6_send(skb, ICMPV6_DEST_UNREACH, code, 0, skb->dev);
kfree_skb(skb);
return 0;
static int ip6_pkt_discard(struct sk_buff *skb)
{
- return ip6_pkt_drop(skb, ICMPV6_NOROUTE);
+ return ip6_pkt_drop(skb, ICMPV6_NOROUTE, IPSTATS_MIB_INNOROUTES);
}
static int ip6_pkt_discard_out(struct sk_buff *skb)
{
skb->dev = skb->dst->dev;
- return ip6_pkt_discard(skb);
+ return ip6_pkt_drop(skb, ICMPV6_NOROUTE, IPSTATS_MIB_OUTNOROUTES);
}
#ifdef CONFIG_IPV6_MULTIPLE_TABLES
static int ip6_pkt_prohibit(struct sk_buff *skb)
{
- return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED);
+ return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_INNOROUTES);
}
static int ip6_pkt_prohibit_out(struct sk_buff *skb)
{
skb->dev = skb->dst->dev;
- return ip6_pkt_prohibit(skb);
+ return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_OUTNOROUTES);
}
static int ip6_pkt_blk_hole(struct sk_buff *skb)
First: no IPv4 options.
*/
newinet->opt = NULL;
+ newnp->ipv6_fl_list = NULL;
/* Clone RX bits */
newnp->rxopt.all = np->rxopt.all;
return udp_sendmsg(iocb, sk, msg, len);
/* Rough check on arithmetic overflow,
- better check is made in ip6_build_xmit
+ better check is made in ip6_append_data().
*/
if (len > INT_MAX - sizeof(struct udphdr))
return -EMSGSIZE;
static struct hlist_head xfrm6_tunnel_spi_byaddr[XFRM6_TUNNEL_SPI_BYADDR_HSIZE];
static struct hlist_head xfrm6_tunnel_spi_byspi[XFRM6_TUNNEL_SPI_BYSPI_HSIZE];
-static unsigned inline xfrm6_tunnel_spi_hash_byaddr(xfrm_address_t *addr)
+static inline unsigned xfrm6_tunnel_spi_hash_byaddr(xfrm_address_t *addr)
{
unsigned h;
return h;
}
-static unsigned inline xfrm6_tunnel_spi_hash_byspi(u32 spi)
+static inline unsigned xfrm6_tunnel_spi_hash_byspi(u32 spi)
{
return spi % XFRM6_TUNNEL_SPI_BYSPI_HSIZE;
}
sk->sk_shutdown |= SEND_SHUTDOWN;
sk->sk_state_change(sk);
- sock_orphan(sk);
release_sock(sk);
/* Close our TSAP.
*/
ret = sock_error(sk);
if (ret)
- break;
+ ;
else if (sk->sk_shutdown & RCV_SHUTDOWN)
;
else if (noblock)
u32 raccm; /* to please pppd - dummy) */
unsigned int flags; /* PPP flags (compression, ...) */
unsigned int rbits; /* Unused receive flags ??? */
-
+ struct work_struct disconnect_work; /* Process context disconnection */
/* ------------------------ IrTTP part ------------------------ */
/* We create a pseudo "socket" over the IrDA tranport */
unsigned long ttp_open; /* Set when IrTTP is ready */
#include "irnet_irda.h" /* Private header */
+/*
+ * PPP disconnect work: we need to make sure we're in
+ * process context when calling ppp_unregister_channel().
+ */
+static void irnet_ppp_disconnect(struct work_struct *work)
+{
+ irnet_socket * self =
+ container_of(work, irnet_socket, disconnect_work);
+
+ if (self == NULL)
+ return;
+ /*
+ * If we were connected, cleanup & close the PPP
+ * channel, which will kill pppd (hangup) and the rest.
+ */
+ if (self->ppp_open && !self->ttp_open && !self->ttp_connect) {
+ ppp_unregister_channel(&self->chan);
+ self->ppp_open = 0;
+ }
+}
+
/************************* CONTROL CHANNEL *************************/
/*
* When ppp is not active, /dev/irnet act as a control channel.
#endif /* DISCOVERY_NOMASK */
self->tx_flow = FLOW_START; /* Flow control from IrTTP */
+ INIT_WORK(&self->disconnect_work, irnet_ppp_disconnect);
+
DEXIT(IRDA_SOCK_TRACE, "\n");
return(0);
}
{
if(test_open)
{
-#ifdef MISSING_PPP_API
- /* ppp_unregister_channel() wants a user context, which we
- * are guaranteed to NOT have here. What are we supposed
- * to do here ? Jean II */
- /* If we were connected, cleanup & close the PPP channel,
- * which will kill pppd (hangup) and the rest */
- ppp_unregister_channel(&self->chan);
- self->ppp_open = 0;
-#endif
+ /* ppp_unregister_channel() wants a user context. */
+ schedule_work(&self->disconnect_work);
}
else
{
/* Not everything should be copied */
new->notify.instance = instance;
+ spin_lock_init(&new->lock);
init_timer(&new->todo_timer);
skb_queue_head_init(&new->rx_queue);
/* NOTREACHED */
}
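+/* Map an XFRM_MODE_* value onto the PF_KEYv2 IPSEC_MODE_* encoding,
+ * returning -1 for modes PF_KEY cannot express. */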
+static inline int pfkey_mode_from_xfrm(int mode)
+{
+ switch(mode) {
+ case XFRM_MODE_TRANSPORT:
+ return IPSEC_MODE_TRANSPORT;
+ case XFRM_MODE_TUNNEL:
+ return IPSEC_MODE_TUNNEL;
+ case XFRM_MODE_BEET:
+ return IPSEC_MODE_BEET;
+ default:
+ return -1;
+ }
+}
+
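+/* Map a PF_KEYv2 IPSEC_MODE_* value onto XFRM_MODE_*, returning -1 for
+ * unknown modes. */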
+static inline int pfkey_mode_to_xfrm(int mode)
+{
+ switch(mode) {
+ case IPSEC_MODE_ANY: /*XXX*/
+ case IPSEC_MODE_TRANSPORT:
+ return XFRM_MODE_TRANSPORT;
+ case IPSEC_MODE_TUNNEL:
+ return XFRM_MODE_TUNNEL;
+ case IPSEC_MODE_BEET:
+ return XFRM_MODE_BEET;
+ default:
+ return -1;
+ }
+}
+
static struct sk_buff * pfkey_xfrm_state2msg(struct xfrm_state *x, int add_keys, int hsc)
{
struct sk_buff *skb;
int encrypt_key_size = 0;
int sockaddr_size;
struct xfrm_encap_tmpl *natt = NULL;
+ int mode;
/* address family check */
sockaddr_size = pfkey_sockaddr_size(x->props.family);
sa2 = (struct sadb_x_sa2 *) skb_put(skb, sizeof(struct sadb_x_sa2));
sa2->sadb_x_sa2_len = sizeof(struct sadb_x_sa2)/sizeof(uint64_t);
sa2->sadb_x_sa2_exttype = SADB_X_EXT_SA2;
- sa2->sadb_x_sa2_mode = x->props.mode + 1;
+ if ((mode = pfkey_mode_from_xfrm(x->props.mode)) < 0) {
+ kfree_skb(skb);
+ return ERR_PTR(-EINVAL);
+ }
+ sa2->sadb_x_sa2_mode = mode;
sa2->sadb_x_sa2_reserved1 = 0;
sa2->sadb_x_sa2_reserved2 = 0;
sa2->sadb_x_sa2_sequence = 0;
if (ext_hdrs[SADB_X_EXT_SA2-1]) {
struct sadb_x_sa2 *sa2 = (void*)ext_hdrs[SADB_X_EXT_SA2-1];
- x->props.mode = sa2->sadb_x_sa2_mode;
- if (x->props.mode)
- x->props.mode--;
+ int mode = pfkey_mode_to_xfrm(sa2->sadb_x_sa2_mode);
+ if (mode < 0) {
+ err = -EINVAL;
+ goto out;
+ }
+ x->props.mode = mode;
x->props.reqid = sa2->sadb_x_sa2_reqid;
}
struct sadb_address *saddr, *daddr;
struct sadb_msg *out_hdr;
struct xfrm_state *x = NULL;
- u8 mode;
+ int mode;
u32 reqid;
u8 proto;
unsigned short family;
return -EINVAL;
if ((sa2 = ext_hdrs[SADB_X_EXT_SA2-1]) != NULL) {
- mode = sa2->sadb_x_sa2_mode - 1;
+ mode = pfkey_mode_to_xfrm(sa2->sadb_x_sa2_mode);
+ if (mode < 0)
+ return -EINVAL;
reqid = sa2->sadb_x_sa2_reqid;
} else {
mode = 0;
#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
struct sockaddr_in6 *sin6;
#endif
+ int mode;
if (xp->xfrm_nr >= XFRM_MAX_DEPTH)
return -ELOOP;
return -EINVAL;
t->id.proto = rq->sadb_x_ipsecrequest_proto; /* XXX check proto */
- t->mode = rq->sadb_x_ipsecrequest_mode-1;
+ if ((mode = pfkey_mode_to_xfrm(rq->sadb_x_ipsecrequest_mode)) < 0)
+ return -EINVAL;
+ t->mode = mode;
if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_USE)
t->optional = 1;
else if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_UNIQUE) {
return skb;
}
-static void pfkey_xfrm_policy2msg(struct sk_buff *skb, struct xfrm_policy *xp, int dir)
+static int pfkey_xfrm_policy2msg(struct sk_buff *skb, struct xfrm_policy *xp, int dir)
{
struct sadb_msg *hdr;
struct sadb_address *addr;
struct sadb_x_ipsecrequest *rq;
struct xfrm_tmpl *t = xp->xfrm_vec + i;
int req_size;
+ int mode;
req_size = sizeof(struct sadb_x_ipsecrequest);
if (t->mode == XFRM_MODE_TUNNEL)
memset(rq, 0, sizeof(*rq));
rq->sadb_x_ipsecrequest_len = req_size;
rq->sadb_x_ipsecrequest_proto = t->id.proto;
- rq->sadb_x_ipsecrequest_mode = t->mode+1;
+ if ((mode = pfkey_mode_from_xfrm(t->mode)) < 0)
+ return -EINVAL;
+ rq->sadb_x_ipsecrequest_mode = mode;
rq->sadb_x_ipsecrequest_level = IPSEC_LEVEL_REQUIRE;
if (t->reqid)
rq->sadb_x_ipsecrequest_level = IPSEC_LEVEL_UNIQUE;
hdr->sadb_msg_len = size / sizeof(uint64_t);
hdr->sadb_msg_reserved = atomic_read(&xp->refcnt);
+
+ return 0;
}
static int key_notify_policy(struct xfrm_policy *xp, int dir, struct km_event *c)
err = PTR_ERR(out_skb);
goto out;
}
- pfkey_xfrm_policy2msg(out_skb, xp, dir);
+ err = pfkey_xfrm_policy2msg(out_skb, xp, dir);
+ if (err < 0)
+ return err;
out_hdr = (struct sadb_msg *) out_skb->data;
out_hdr->sadb_msg_version = PF_KEY_V2;
err = PTR_ERR(out_skb);
goto out;
}
- pfkey_xfrm_policy2msg(out_skb, xp, dir);
+ err = pfkey_xfrm_policy2msg(out_skb, xp, dir);
+ if (err < 0)
+ goto out;
out_hdr = (struct sadb_msg *) out_skb->data;
out_hdr->sadb_msg_version = hdr->sadb_msg_version;
{
int err;
struct sadb_x_ipsecrequest *rq2;
+ int mode;
if (len <= sizeof(struct sadb_x_ipsecrequest) ||
len < rq1->sadb_x_ipsecrequest_len)
return -EINVAL;
m->proto = rq1->sadb_x_ipsecrequest_proto;
- m->mode = rq1->sadb_x_ipsecrequest_mode - 1;
+ if ((mode = pfkey_mode_to_xfrm(rq1->sadb_x_ipsecrequest_mode)) < 0)
+ return -EINVAL;
+ m->mode = mode;
m->reqid = rq1->sadb_x_ipsecrequest_reqid;
return ((int)(rq1->sadb_x_ipsecrequest_len +
struct pfkey_dump_data *data = ptr;
struct sk_buff *out_skb;
struct sadb_msg *out_hdr;
+ int err;
out_skb = pfkey_xfrm_policy2msg_prep(xp);
if (IS_ERR(out_skb))
return PTR_ERR(out_skb);
- pfkey_xfrm_policy2msg(out_skb, xp, dir);
+ err = pfkey_xfrm_policy2msg(out_skb, xp, dir);
+ if (err < 0)
+ return err;
out_hdr = (struct sadb_msg *) out_skb->data;
out_hdr->sadb_msg_version = data->hdr->sadb_msg_version;
for (i = 0, mp = m; i < num_bundles; i++, mp++) {
/* old ipsecrequest */
- if (set_ipsecrequest(skb, mp->proto, mp->mode + 1,
+ int mode = pfkey_mode_from_xfrm(mp->mode);
+ if (mode < 0)
+ return -EINVAL;
+ if (set_ipsecrequest(skb, mp->proto, mode,
(mp->reqid ? IPSEC_LEVEL_UNIQUE : IPSEC_LEVEL_REQUIRE),
mp->reqid, mp->old_family,
&mp->old_saddr, &mp->old_daddr) < 0) {
}
/* new ipsecrequest */
- if (set_ipsecrequest(skb, mp->proto, mp->mode + 1,
+ if (set_ipsecrequest(skb, mp->proto, mode,
(mp->reqid ? IPSEC_LEVEL_UNIQUE : IPSEC_LEVEL_REQUIRE),
mp->reqid, mp->new_family,
&mp->new_saddr, &mp->new_daddr) < 0) {
tristate 'Connection tracking netlink interface (EXPERIMENTAL)'
depends on EXPERIMENTAL && NF_CONNTRACK && NETFILTER_NETLINK
depends on NF_CONNTRACK!=y || NETFILTER_NETLINK!=m
+ depends on NF_NAT=n || NF_NAT
help
This option enables support for a netlink-based userspace interface
return 0;
netlink_remove(sk);
+ sock_orphan(sk);
nlk = nlk_sk(sk);
spin_lock(&nlk->cb_lock);
/* OK. Socket is unlinked, and, therefore,
no new packets will arrive */
- sock_orphan(sk);
sock->sk = NULL;
wake_up_interruptible_all(&nlk->wait);
return -ECONNREFUSED;
}
nlk = nlk_sk(sk);
- /* A dump is in progress... */
+ /* A dump or destruction is in progress... */
spin_lock(&nlk->cb_lock);
- if (nlk->cb) {
+ if (nlk->cb || sock_flag(sk, SOCK_DEAD)) {
spin_unlock(&nlk->cb_lock);
netlink_destroy_callback(cb);
sock_put(sk);
unsigned char cause, diagnostic;
struct net_device *dev;
ax25_uid_assoc *user;
- int n;
-
- if (sk->sk_state == TCP_ESTABLISHED && sock->state == SS_CONNECTING) {
- sock->state = SS_CONNECTED;
- return 0; /* Connect completed during a ERESTARTSYS event */
- }
-
- if (sk->sk_state == TCP_CLOSE && sock->state == SS_CONNECTING) {
- sock->state = SS_UNCONNECTED;
- return -ECONNREFUSED;
- }
-
- if (sk->sk_state == TCP_ESTABLISHED)
- return -EISCONN; /* No reconnect on a seqpacket socket */
-
- sk->sk_state = TCP_CLOSE;
- sock->state = SS_UNCONNECTED;
+ int n, err = 0;
if (addr_len != sizeof(struct sockaddr_rose) && addr_len != sizeof(struct full_sockaddr_rose))
return -EINVAL;
if ((rose->source_ndigis + addr->srose_ndigis) > ROSE_MAX_DIGIS)
return -EINVAL;
+ lock_sock(sk);
+
+ if (sk->sk_state == TCP_ESTABLISHED && sock->state == SS_CONNECTING) {
+ /* Connect completed during an ERESTARTSYS event */
+ sock->state = SS_CONNECTED;
+ goto out_release;
+ }
+
+ if (sk->sk_state == TCP_CLOSE && sock->state == SS_CONNECTING) {
+ sock->state = SS_UNCONNECTED;
+ err = -ECONNREFUSED;
+ goto out_release;
+ }
+
+ if (sk->sk_state == TCP_ESTABLISHED) {
+ /* No reconnect on a seqpacket socket */
+ err = -EISCONN;
+ goto out_release;
+ }
+
+ sk->sk_state = TCP_CLOSE;
+ sock->state = SS_UNCONNECTED;
+
rose->neighbour = rose_get_neigh(&addr->srose_addr, &cause,
&diagnostic);
if (!rose->neighbour)
return -ENETUNREACH;
rose->lci = rose_new_lci(rose->neighbour);
- if (!rose->lci)
- return -ENETUNREACH;
+ if (!rose->lci) {
+ err = -ENETUNREACH;
+ goto out_release;
+ }
if (sock_flag(sk, SOCK_ZAPPED)) { /* Must bind first - autobinding in this may or may not work */
sock_reset_flag(sk, SOCK_ZAPPED);
- if ((dev = rose_dev_first()) == NULL)
- return -ENETUNREACH;
+ if ((dev = rose_dev_first()) == NULL) {
+ err = -ENETUNREACH;
+ goto out_release;
+ }
user = ax25_findbyuid(current->euid);
- if (!user)
- return -EINVAL;
+ if (!user) {
+ err = -EINVAL;
+ goto out_release;
+ }
memcpy(&rose->source_addr, dev->dev_addr, ROSE_ADDR_LEN);
rose->source_call = user->call;
rose_start_t1timer(sk);
/* Now the loop */
- if (sk->sk_state != TCP_ESTABLISHED && (flags & O_NONBLOCK))
- return -EINPROGRESS;
+ if (sk->sk_state != TCP_ESTABLISHED && (flags & O_NONBLOCK)) {
+ err = -EINPROGRESS;
+ goto out_release;
+ }
/*
* A Connect Ack with Choke or timeout or failed routing will go to
set_current_state(TASK_INTERRUPTIBLE);
if (sk->sk_state != TCP_SYN_SENT)
break;
+ release_sock(sk);
if (!signal_pending(tsk)) {
schedule();
+ lock_sock(sk);
continue;
}
current->state = TASK_RUNNING;
rose->neighbour = rose_get_neigh(&addr->srose_addr, &cause, &diagnostic);
if (rose->neighbour)
goto rose_try_next_neigh;
- /* No more neighbour */
+
+ /* No more neighbours */
sock->state = SS_UNCONNECTED;
- return sock_error(sk); /* Always set at this point */
+ err = sock_error(sk); /* Always set at this point */
+ goto out_release;
}
sock->state = SS_CONNECTED;
- return 0;
+out_release:
+ release_sock(sk);
+
+ return err;
}
static int rose_accept(struct socket *sock, struct socket *newsock, int flags)
lock_sock(sk);
continue;
}
+ current->state = TASK_RUNNING;
+ remove_wait_queue(sk->sk_sleep, &wait);
return -ERESTARTSYS;
}
current->state = TASK_RUNNING;
obj-$(CONFIG_NET_SCH_FIFO) += sch_fifo.o
obj-$(CONFIG_NET_SCH_CBQ) += sch_cbq.o
obj-$(CONFIG_NET_SCH_HTB) += sch_htb.o
-obj-$(CONFIG_NET_SCH_HPFQ) += sch_hpfq.o
obj-$(CONFIG_NET_SCH_HFSC) += sch_hfsc.o
obj-$(CONFIG_NET_SCH_RED) += sch_red.o
obj-$(CONFIG_NET_SCH_GRED) += sch_gred.o
skb2->tc_verd = SET_TC_FROM(skb2->tc_verd, at);
skb2->dev = dev;
- skb2->input_dev = skb->dev;
+ skb2->iif = skb->dev->ifindex;
dev_queue_xmit(skb2);
spin_unlock(&m->tcf_lock);
return m->tcf_action;
static int basic_init(struct tcf_proto *tp)
{
+ struct basic_head *head;
+
+ head = kzalloc(sizeof(*head), GFP_KERNEL);
+ if (head == NULL)
+ return -ENOBUFS;
+ INIT_LIST_HEAD(&head->flist);
+ tp->root = head;
return 0;
}
list_del(&f->link);
basic_delete_filter(tp, f);
}
+ kfree(head);
}
static int basic_delete(struct tcf_proto *tp, unsigned long arg)
}
err = -ENOBUFS;
- if (head == NULL) {
- head = kzalloc(sizeof(*head), GFP_KERNEL);
- if (head == NULL)
- goto errout;
-
- INIT_LIST_HEAD(&head->flist);
- tp->root = head;
- }
-
f = kzalloc(sizeof(*f), GFP_KERNEL);
if (f == NULL)
goto errout;
spin_unlock_bh(&dev->queue_lock);
}
-static void __inline__
+static inline void
route4_set_fastmap(struct route4_head *head, u32 id, int iif,
struct route4_filter *f)
{
}
if (tb[TCA_TCINDEX_SHIFT-1]) {
- if (RTA_PAYLOAD(tb[TCA_TCINDEX_SHIFT-1]) < sizeof(u16))
+ if (RTA_PAYLOAD(tb[TCA_TCINDEX_SHIFT-1]) < sizeof(int))
goto errout;
- cp.shift = *(u16 *) RTA_DATA(tb[TCA_TCINDEX_SHIFT-1]);
+ cp.shift = *(int *) RTA_DATA(tb[TCA_TCINDEX_SHIFT-1]);
}
err = -EBUSY;
sch_tree_lock(sch);
- list_del(&cl->hlist);
list_del(&cl->siblings);
hfsc_adjust_levels(cl->cl_parent);
+
hfsc_purge_queue(sch, cl);
+ list_del(&cl->hlist);
+
if (--cl->refcnt == 0)
hfsc_destroy_class(sch, cl);
sch_tree_lock(sch);
- /* delete from hash and active; remainder in destroy_class */
- hlist_del_init(&cl->hlist);
-
if (!cl->level) {
qlen = cl->un.leaf.q->q.qlen;
qdisc_reset(cl->un.leaf.q);
qdisc_tree_decrease_qlen(cl->un.leaf.q, qlen);
}
+ /* delete from hash and active; remainder in destroy_class */
+ hlist_del_init(&cl->hlist);
+
if (cl->prio_activity)
htb_deactivate(q, cl);
trans = list_entry(pos, struct sctp_transport, transports);
if (!sctp_assoc_lookup_paddr(new, &trans->ipaddr))
sctp_assoc_del_peer(asoc, &trans->ipaddr);
+
+ if (asoc->state >= SCTP_STATE_ESTABLISHED)
+ sctp_transport_reset(trans);
}
/* If the case is A (association restart), use
*/
sctp_ssnmap_clear(asoc->ssnmap);
+ /* Flush the ULP reassembly and ordering queues.
+ * Any data there will now be stale and will
+ * cause problems.
+ */
+ sctp_ulpq_flush(&asoc->ulpq);
+
+ /* Reset the overall association error count so
+ * that the restarted association doesn't get torn
+ * down on the next retransmission timeout.
+ */
+ asoc->overall_error_count = 0;
+
} else {
/* Add any peer addresses from the new association. */
list_for_each(pos, &new->peer.transport_addr_list) {
void *arg,
sctp_cmd_seq_t *commands)
{
- return sctp_sf_heartbeat(ep, asoc, type, (struct sctp_transport *)arg,
- commands);
+ if (SCTP_DISPOSITION_NOMEM == sctp_sf_heartbeat(ep, asoc, type,
+ (struct sctp_transport *)arg, commands))
+ return SCTP_DISPOSITION_NOMEM;
+
+ /*
+ * RFC 2960 (bis), section 8.3
+ *
+ * D) Request an on-demand HEARTBEAT on a specific destination
+ * transport address of a given association.
+ *
+ * The endpoint should increment the respective error counter of
+ * the destination transport address each time a HEARTBEAT is sent
+ * to that address and not acknowledged within one RTO.
+ *
+ */
+ sctp_add_cmd_sf(commands, SCTP_CMD_TRANSPORT_RESET,
+ SCTP_TRANSPORT(arg));
+ return SCTP_DISPOSITION_CONSUME;
}
/*
retval = -EINVAL;
goto err_bindx_rem;
}
+
+ if (!af->addr_valid(sa_addr, sp, NULL)) {
+ retval = -EADDRNOTAVAIL;
+ goto err_bindx_rem;
+ }
+
if (sa_addr->v4.sin_port != htons(bp->port)) {
retval = -EINVAL;
goto err_bindx_rem;
finish_wait(sk->sk_sleep, &wait);
}
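+/* Release the receive-buffer charge for an skb and, recursively, for
+ * every fragment chained on its frag_list. */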
+static void sctp_sock_rfree_frag(struct sk_buff *skb)
+{
+ struct sk_buff *frag;
+
+ if (!skb->data_len)
+ goto done;
+
+ /* Don't forget the fragments. */
+ for (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next)
+ sctp_sock_rfree_frag(frag);
+
+done:
+ sctp_sock_rfree(skb);
+}
+
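+/* Charge an skb, and recursively every fragment on its frag_list, to the
+ * receive buffer of the given socket. */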
+static void sctp_skb_set_owner_r_frag(struct sk_buff *skb, struct sock *sk)
+{
+ struct sk_buff *frag;
+
+ if (!skb->data_len)
+ goto done;
+
+ /* Don't forget the fragments. */
+ for (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next)
+ sctp_skb_set_owner_r_frag(frag, sk);
+
+done:
+ sctp_skb_set_owner_r(skb, sk);
+}
+
/* Populate the fields of the newsk from the oldsk and migrate the assoc
* and its messages to the newsk.
*/
sctp_skb_for_each(skb, &oldsk->sk_receive_queue, tmp) {
event = sctp_skb2event(skb);
if (event->asoc == assoc) {
- sctp_sock_rfree(skb);
+ sctp_sock_rfree_frag(skb);
__skb_unlink(skb, &oldsk->sk_receive_queue);
__skb_queue_tail(&newsk->sk_receive_queue, skb);
- sctp_skb_set_owner_r(skb, newsk);
+ sctp_skb_set_owner_r_frag(skb, newsk);
}
}
sctp_skb_for_each(skb, &oldsp->pd_lobby, tmp) {
event = sctp_skb2event(skb);
if (event->asoc == assoc) {
- sctp_sock_rfree(skb);
+ sctp_sock_rfree_frag(skb);
__skb_unlink(skb, &oldsp->pd_lobby);
__skb_queue_tail(queue, skb);
- sctp_skb_set_owner_r(skb, newsk);
+ sctp_skb_set_owner_r_frag(skb, newsk);
}
}
}
+ sctp_skb_for_each(skb, &assoc->ulpq.reasm, tmp) {
+ sctp_sock_rfree_frag(skb);
+ sctp_skb_set_owner_r_frag(skb, newsk);
+ }
+
+ sctp_skb_for_each(skb, &assoc->ulpq.lobby, tmp) {
+ sctp_sock_rfree_frag(skb);
+ sctp_skb_set_owner_r_frag(skb, newsk);
+ }
+
/* Set the type of socket to indicate that it is peeled off from the
* original UDP-style socket or created with the accept() call on a
* TCP-style socket..
timeout += jiffies;
return timeout;
}
+
+/* Reset transport variables to their initial values */
+void sctp_transport_reset(struct sctp_transport *t)
+{
+ struct sctp_association *asoc = t->asoc;
+
+ /* RFC 2960 (bis), Section 5.2.4
+ * All the congestion control parameters (e.g., cwnd, ssthresh)
+ * related to this peer MUST be reset to their initial values
+ * (see Section 6.2.1)
+ */
+ t->cwnd = min(4*asoc->pathmtu, max_t(__u32, 2*asoc->pathmtu, 4380));
+ t->ssthresh = asoc->peer.i.a_rwnd;
+ t->rto = asoc->rto_initial;
+ t->rtt = 0;
+ t->srtt = 0;
+ t->rttvar = 0;
+
+ /* Reset these additional variables so that we have a clean
+ * slate.
+ */
+ t->partial_bytes_acked = 0;
+ t->flight_size = 0;
+ t->error_count = 0;
+ t->rto_pending = 0;
+
+ /* Initialize the state information for SFR-CACC */
+ t->cacc.changeover_active = 0;
+ t->cacc.cycling_changeover = 0;
+ t->cacc.next_tsn_at_change = 0;
+ t->cacc.cacc_saw_newack = 0;
+}
/* Flush the reassembly and ordering queues. */
-static void sctp_ulpq_flush(struct sctp_ulpq *ulpq)
+void sctp_ulpq_flush(struct sctp_ulpq *ulpq)
{
struct sk_buff *skb;
struct sctp_ulpevent *event;
if (!sctp_sk(sk)->pd_mode) {
queue = &sk->sk_receive_queue;
} else if (ulpq->pd_mode) {
- if (event->msg_flags & MSG_NOTIFICATION)
+ /* If the association is in partial delivery, we
+ * need to finish delivering the partially processed
+ * packet before passing any other data. This is
+ * because we don't truly support stream interleaving.
+ */
+ if ((event->msg_flags & MSG_NOTIFICATION) ||
+ (SCTP_DATA_NOT_FRAG ==
+ (event->msg_flags & SCTP_DATA_FRAG_MASK)))
queue = &sctp_sk(sk)->pd_lobby;
else {
clear_pd = event->msg_flags & MSG_EOR;
err = sock_attach_fd(newsock, newfile);
if (err < 0)
- goto out_fd;
+ goto out_fd_simple;
err = security_socket_accept(sock, newsock);
if (err)
fput_light(sock->file, fput_needed);
out:
return err;
+out_fd_simple:
+ sock_release(newsock);
+ put_filp(newfile);
+ put_unused_fd(newfd);
+ goto out_put;
out_fd:
fput(newfile);
put_unused_fd(newfd);
rpc_delay(task, 3*HZ);
case -ETIMEDOUT:
task->tk_action = call_timeout;
+ if (task->tk_client->cl_discrtry)
+ xprt_disconnect(task->tk_xprt);
break;
case -ECONNREFUSED:
case -ENOTCONN:
out_retry:
req->rq_received = req->rq_private_buf.len = 0;
task->tk_status = 0;
+ if (task->tk_client->cl_discrtry)
+ xprt_disconnect(task->tk_xprt);
}
/*
static inline struct ip_map *
ip_map_cached_get(struct svc_rqst *rqstp)
{
- struct ip_map *ipm = rqstp->rq_sock->sk_info_authunix;
+ struct ip_map *ipm;
+ struct svc_sock *svsk = rqstp->rq_sock;
+ spin_lock_bh(&svsk->sk_defer_lock);
+ ipm = svsk->sk_info_authunix;
if (ipm != NULL) {
if (!cache_valid(&ipm->h)) {
/*
* remembered, e.g. by a second mount from the
* same IP address.
*/
- rqstp->rq_sock->sk_info_authunix = NULL;
+ svsk->sk_info_authunix = NULL;
+ spin_unlock_bh(&svsk->sk_defer_lock);
cache_put(&ipm->h, &ip_map_cache);
return NULL;
}
cache_get(&ipm->h);
}
+ spin_unlock_bh(&svsk->sk_defer_lock);
return ipm;
}
{
struct svc_sock *svsk = rqstp->rq_sock;
- if (svsk->sk_sock->type == SOCK_STREAM && svsk->sk_info_authunix == NULL)
- svsk->sk_info_authunix = ipm; /* newly cached, keep the reference */
- else
+ spin_lock_bh(&svsk->sk_defer_lock);
+ if (svsk->sk_sock->type == SOCK_STREAM &&
+ svsk->sk_info_authunix == NULL) {
+ /* newly cached, keep the reference */
+ svsk->sk_info_authunix = ipm;
+ ipm = NULL;
+ }
+ spin_unlock_bh(&svsk->sk_defer_lock);
+ if (ipm)
cache_put(&ipm->h, &ip_map_cache);
}
struct in_pktinfo pkti;
struct in6_pktinfo pkti6;
};
+#define SVC_PKTINFO_SPACE \
+ CMSG_SPACE(sizeof(union svc_pktinfo_u))
static void svc_set_cmsg_data(struct svc_rqst *rqstp, struct cmsghdr *cmh)
{
struct svc_sock *svsk = rqstp->rq_sock;
struct socket *sock = svsk->sk_sock;
int slen;
- char buffer[CMSG_SPACE(sizeof(union svc_pktinfo_u))];
- struct cmsghdr *cmh = (struct cmsghdr *)buffer;
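+ /* Use a union so the on-stack control message buffer is suitably
+ * aligned for struct cmsghdr. */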
+ union {
+ struct cmsghdr hdr;
+ long all[SVC_PKTINFO_SPACE / sizeof(long)];
+ } buffer;
+ struct cmsghdr *cmh = &buffer.hdr;
int len = 0;
int result;
int size;
struct svc_sock *svsk = rqstp->rq_sock;
struct svc_serv *serv = svsk->sk_server;
struct sk_buff *skb;
- char buffer[CMSG_SPACE(sizeof(union svc_pktinfo_u))];
- struct cmsghdr *cmh = (struct cmsghdr *)buffer;
+ union {
+ struct cmsghdr hdr;
+ long all[SVC_PKTINFO_SPACE / sizeof(long)];
+ } buffer;
+ struct cmsghdr *cmh = &buffer.hdr;
int err, len;
struct msghdr msg = {
.msg_name = svc_addr(rqstp),
}
clear_bit(SK_DATA, &svsk->sk_flags);
- while ((err == kernel_recvmsg(svsk->sk_sock, &msg, NULL,
- 0, 0, MSG_PEEK | MSG_DONTWAIT)) < 0 ||
+ while ((err = kernel_recvmsg(svsk->sk_sock, &msg, NULL,
+ 0, 0, MSG_PEEK | MSG_DONTWAIT)) < 0 ||
(skb = skb_recv_datagram(svsk->sk_sk, 0, 1, &err)) == NULL) {
if (err == -EAGAIN) {
svc_sock_received(svsk);
xprt_reset_majortimeo(req);
/* Turn off autodisconnect */
del_singleshot_timer_sync(&xprt->timer);
- } else {
- /* If all request bytes have been sent,
- * then we must be retransmitting this one */
- if (!req->rq_bytes_sent) {
- if (task->tk_client->cl_discrtry) {
- xprt_disconnect(xprt);
- task->tk_status = -ENOTCONN;
- return;
- }
- }
}
} else if (!req->rq_bytes_sent)
return;
+++ /dev/null
-/*****************************************************************************
-* af_wanpipe.c WANPIPE(tm) Secure Socket Layer.
-*
-* Author: Nenad Corbic <ncorbic@sangoma.com>
-*
-* Copyright: (c) 2000 Sangoma Technologies Inc.
-*
-* This program is free software; you can redistribute it and/or
-* modify it under the terms of the GNU General Public License
-* as published by the Free Software Foundation; either version
-* 2 of the License, or (at your option) any later version.
-* ============================================================================
-* Due Credit:
-* Wanpipe socket layer is based on Packet and
-* the X25 socket layers. The above sockets were
-* used for the specific use of Sangoma Technologies
-* API programs.
-* Packet socket Authors: Ross Biro, Fred N. van Kempen and
-* Alan Cox.
-* X25 socket Author: Jonathan Naylor.
-* ============================================================================
-* Mar 15, 2002 Arnaldo C. Melo o Use wp_sk()->num, as it isnt anymore in sock
-* Apr 25, 2000 Nenad Corbic o Added the ability to send zero length packets.
-* Mar 13, 2000 Nenad Corbic o Added a tx buffer check via ioctl call.
-* Mar 06, 2000 Nenad Corbic o Fixed the corrupt sock lcn problem.
-* Server and client application can run
-* simultaneously without conflicts.
-* Feb 29, 2000 Nenad Corbic o Added support for PVC protocols, such as
-* CHDLC, Frame Relay and HDLC API.
-* Jan 17, 2000 Nenad Corbic o Initial version, based on AF_PACKET socket.
-* X25API support only.
-*
-******************************************************************************/
-
-#include <linux/types.h>
-#include <linux/sched.h>
-#include <linux/mm.h>
-#include <linux/capability.h>
-#include <linux/fcntl.h>
-#include <linux/socket.h>
-#include <linux/in.h>
-#include <linux/inet.h>
-#include <linux/netdevice.h>
-#include <linux/poll.h>
-#include <linux/wireless.h>
-#include <linux/kmod.h>
-#include <net/ip.h>
-#include <net/protocol.h>
-#include <linux/skbuff.h>
-#include <net/sock.h>
-#include <linux/errno.h>
-#include <linux/timer.h>
-#include <asm/system.h>
-#include <asm/uaccess.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/if_wanpipe.h>
-#include <linux/pkt_sched.h>
-#include <linux/tcp_states.h>
-#include <linux/if_wanpipe_common.h>
-
-#ifdef CONFIG_INET
-#include <net/inet_common.h>
-#endif
-
-#define SLOW_BACKOFF 0.1*HZ
-#define FAST_BACKOFF 0.01*HZ
-
-//#define PRINT_DEBUG
-#ifdef PRINT_DEBUG
- #define DBG_PRINTK(format, a...) printk(format, ## a)
-#else
- #define DBG_PRINTK(format, a...)
-#endif
-
-
-/* SECURE SOCKET IMPLEMENTATION
- *
- * TRANSMIT:
- *
- * When the user sends a packet via send() system call
- * the wanpipe_sendmsg() function is executed.
- *
- * Each packet is enqueud into sk->sk_write_queue transmit
- * queue. When the packet is enqueued, a delayed transmit
- * timer is triggerd which acts as a Bottom Half hander.
- *
- * wanpipe_delay_transmit() function (BH), dequeues packets
- * from the sk->sk_write_queue transmit queue and sends it
- * to the deriver via dev->hard_start_xmit(skb, dev) function.
- * Note, this function is actual a function pointer of if_send()
- * routine in the wanpipe driver.
- *
- * X25API GUARANTEED DELIVERY:
- *
- * In order to provide 100% guaranteed packet delivery,
- * an atomic 'packet_sent' counter is implemented. Counter
- * is incremented for each packet enqueued
- * into sk->sk_write_queue. Counter is decremented each
- * time wanpipe_delayed_transmit() function successfuly
- * passes the packet to the driver. Before each send(), a poll
- * routine checks the sock resources The maximum value of
- * packet sent counter is 1, thus if one packet is queued, the
- * application will block until that packet is passed to the
- * driver.
- *
- * RECEIVE:
- *
- * Wanpipe device drivers call the socket bottom half
- * function, wanpipe_rcv() to queue the incoming packets
- * into an AF_WANPIPE socket queue. Based on wanpipe_rcv()
- * return code, the driver knows whether the packet was
- * successfully queued. If the socket queue is full,
- * protocol flow control is used by the driver, if any,
- * to slow down the traffic until the sock queue is free.
- *
- * Every time a packet arrives into a socket queue the
- * socket wakes up processes which are waiting to receive
- * data.
- *
- * If the socket queue is full, the driver sets a block
- * bit which signals the socket to kick the wanpipe driver
- * bottom half hander when the socket queue is partialy
- * empty. wanpipe_recvmsg() function performs this action.
- *
- * In case of x25api, packets will never be dropped, since
- * flow control is available.
- *
- * In case of streaming protocols like CHDLC, packets will
- * be dropped but the statistics will be generated.
- */
-
-
-/* The code below is used to test memory leaks. It prints out
- * a message every time kmalloc and kfree system calls get executed.
- * If the calls match there is no leak :)
- */
-
-/***********FOR DEBUGGING PURPOSES*********************************************
-#define KMEM_SAFETYZONE 8
-
-static void * dbg_kmalloc(unsigned int size, int prio, int line) {
- void * v = kmalloc(size,prio);
- printk(KERN_INFO "line %d kmalloc(%d,%d) = %p\n",line,size,prio,v);
- return v;
-}
-static void dbg_kfree(void * v, int line) {
- printk(KERN_INFO "line %d kfree(%p)\n",line,v);
- kfree(v);
-}
-
-#define kmalloc(x,y) dbg_kmalloc(x,y,__LINE__)
-#define kfree(x) dbg_kfree(x,__LINE__)
-******************************************************************************/
-
-
-/* List of all wanpipe sockets. */
-HLIST_HEAD(wanpipe_sklist);
-static DEFINE_RWLOCK(wanpipe_sklist_lock);
-
-atomic_t wanpipe_socks_nr;
-static unsigned long wanpipe_tx_critical;
-
-#if 0
-/* Private wanpipe socket structures. */
-struct wanpipe_opt
-{
- void *mbox; /* Mail box */
- void *card; /* Card bouded to */
- struct net_device *dev; /* Bounded device */
- unsigned short lcn; /* Binded LCN */
- unsigned char svc; /* 0=pvc, 1=svc */
- unsigned char timer; /* flag for delayed transmit*/
- struct timer_list tx_timer;
- unsigned poll_cnt;
- unsigned char force; /* Used to force sock release */
- atomic_t packet_sent;
-};
-#endif
-
-static int sk_count;
-extern const struct proto_ops wanpipe_ops;
-static unsigned long find_free_critical;
-
-static void wanpipe_unlink_driver(struct sock *sk);
-static void wanpipe_link_driver(struct net_device *dev, struct sock *sk);
-static void wanpipe_wakeup_driver(struct sock *sk);
-static int execute_command(struct sock *, unsigned char, unsigned int);
-static int check_dev(struct net_device *dev, sdla_t *card);
-struct net_device *wanpipe_find_free_dev(sdla_t *card);
-static void wanpipe_unlink_card (struct sock *);
-static int wanpipe_link_card (struct sock *);
-static struct sock *wanpipe_make_new(struct sock *);
-static struct sock *wanpipe_alloc_socket(void);
-static inline int get_atomic_device(struct net_device *dev);
-static int wanpipe_exec_cmd(struct sock *, int, unsigned int);
-static int get_ioctl_cmd (struct sock *, void *);
-static int set_ioctl_cmd (struct sock *, void *);
-static void release_device(struct net_device *dev);
-static void wanpipe_kill_sock_timer (unsigned long data);
-static void wanpipe_kill_sock_irq (struct sock *);
-static void wanpipe_kill_sock_accept (struct sock *);
-static int wanpipe_do_bind(struct sock *sk, struct net_device *dev,
- int protocol);
-struct sock * get_newsk_from_skb (struct sk_buff *);
-static int wanpipe_debug (struct sock *, void *);
-static void wanpipe_delayed_transmit (unsigned long data);
-static void release_driver(struct sock *);
-static void start_cleanup_timer (struct sock *);
-static void check_write_queue(struct sock *);
-static int check_driver_busy (struct sock *);
-
-/*============================================================
- * wanpipe_rcv
- *
- * Wanpipe socket bottom half handler. This function
- * is called by the WANPIPE device drivers to queue a
- * incoming packet into the socket receive queue.
- * Once the packet is queued, all processes waiting to
- * read are woken up.
- *
- * During socket bind, this function is bounded into
- * WANPIPE driver private.
- *===========================================================*/
-
-static int wanpipe_rcv(struct sk_buff *skb, struct net_device *dev,
- struct sock *sk)
-{
- struct wan_sockaddr_ll *sll = (struct wan_sockaddr_ll*)skb->cb;
- wanpipe_common_t *chan = dev->priv;
- /*
- * When we registered the protocol we saved the socket in the data
- * field for just this event.
- */
-
- skb->dev = dev;
-
- sll->sll_family = AF_WANPIPE;
- sll->sll_hatype = dev->type;
- sll->sll_protocol = skb->protocol;
- sll->sll_pkttype = skb->pkt_type;
- sll->sll_ifindex = dev->ifindex;
- sll->sll_halen = 0;
-
- if (dev->hard_header_parse)
- sll->sll_halen = dev->hard_header_parse(skb, sll->sll_addr);
-
- /*
- * WAN_PACKET_DATA : Data which should be passed up the receive queue.
- * WAN_PACKET_ASYC : Asynchronous data like place call, which should
- * be passed up the listening sock.
- * WAN_PACKET_ERR : Asynchronous data like clear call or restart
- * which should go into an error queue.
- */
- switch (skb->pkt_type){
-
- case WAN_PACKET_DATA:
- if (sock_queue_rcv_skb(sk,skb)<0){
- return -ENOMEM;
- }
- break;
- case WAN_PACKET_CMD:
- sk->sk_state = chan->state;
- /* Bug fix: update Mar6.
- * Do not set the sock lcn number here, since
- * cmd is not guaranteed to be executed on the
- * board, thus Lcn could be wrong */
- sk->sk_data_ready(sk, skb->len);
- kfree_skb(skb);
- break;
- case WAN_PACKET_ERR:
- sk->sk_state = chan->state;
- if (sock_queue_err_skb(sk,skb)<0){
- return -ENOMEM;
- }
- break;
- default:
- printk(KERN_INFO "wansock: BH Illegal Packet Type Dropping\n");
- kfree_skb(skb);
- break;
- }
-
-//??????????????????????
-// if (sk->sk_state == WANSOCK_DISCONNECTED){
-// if (sk->sk_zapped) {
-// //printk(KERN_INFO "wansock: Disconnected, killing early\n");
-// wanpipe_unlink_driver(sk);
-// sk->sk_bound_dev_if = 0;
-// }
-// }
-
- return 0;
-}
-
-/*============================================================
- * wanpipe_listen_rcv
- *
- * Wanpipe LISTEN socket bottom half handler. This function
- * is called by the WANPIPE device drivers to queue an
- * incoming call into the socket listening queue.
- * Once the packet is queued, the waiting accept() process
- * is woken up.
- *
- * During socket bind, this function is bounded into
- * WANPIPE driver private.
- *
- * IMPORTANT NOTE:
- * The accept call() is waiting for an skb packet
- * which contains a pointer to a device structure.
- *
- * When we do a bind to a device structre, we
- * bind a newly created socket into "chan->sk". Thus,
- * when accept receives the skb packet, it will know
- * from which dev it came form, and in turn it will know
- * the address of the new sock.
- *
- * NOTE: This function gets called from driver ISR.
- *===========================================================*/
-
-static int wanpipe_listen_rcv (struct sk_buff *skb, struct sock *sk)
-{
- wanpipe_opt *wp = wp_sk(sk), *newwp;
- struct wan_sockaddr_ll *sll = (struct wan_sockaddr_ll*)skb->cb;
- struct sock *newsk;
- struct net_device *dev;
- sdla_t *card;
- mbox_cmd_t *mbox_ptr;
- wanpipe_common_t *chan;
-
- /* Find a free device, if none found, all svc's are busy
- */
-
- card = (sdla_t*)wp->card;
- if (!card){
- printk(KERN_INFO "wansock: LISTEN ERROR, No Card\n");
- return -ENODEV;
- }
-
- dev = wanpipe_find_free_dev(card);
- if (!dev){
- printk(KERN_INFO "wansock: LISTEN ERROR, No Free Device\n");
- return -ENODEV;
- }
-
- chan=dev->priv;
- chan->state = WANSOCK_CONNECTING;
-
- /* Allocate a new sock, which accept will bind
- * and pass up to the user
- */
- if ((newsk = wanpipe_make_new(sk)) == NULL){
- release_device(dev);
- return -ENOMEM;
- }
-
-
- /* Initialize the new sock structure
- */
- newsk->sk_bound_dev_if = dev->ifindex;
- newwp = wp_sk(newsk);
- newwp->card = wp->card;
-
- /* Insert the sock into the main wanpipe
- * sock list.
- */
- atomic_inc(&wanpipe_socks_nr);
-
- /* Allocate and fill in the new Mail Box. Then
- * bind the mail box to the sock. It will be
- * used by the ioctl call to read call information
- * and to execute commands.
- */
- if ((mbox_ptr = kzalloc(sizeof(mbox_cmd_t), GFP_ATOMIC)) == NULL) {
- wanpipe_kill_sock_irq (newsk);
- release_device(dev);
- return -ENOMEM;
- }
- memcpy(mbox_ptr,skb->data,skb->len);
-
- /* Register the lcn on which incoming call came
- * from. Thus, if we have to clear it, we know
- * which lcn to clear
- */
-
- newwp->lcn = mbox_ptr->cmd.lcn;
- newwp->mbox = (void *)mbox_ptr;
-
- DBG_PRINTK(KERN_INFO "NEWSOCK : Device %s, bind to lcn %i\n",
- dev->name,mbox_ptr->cmd.lcn);
-
- chan->lcn = mbox_ptr->cmd.lcn;
- card->u.x.svc_to_dev_map[(chan->lcn%MAX_X25_LCN)] = dev;
-
- sock_reset_flag(newsk, SOCK_ZAPPED);
- newwp->num = htons(X25_PROT);
-
- if (wanpipe_do_bind(newsk, dev, newwp->num)) {
- wanpipe_kill_sock_irq (newsk);
- release_device(dev);
- return -EINVAL;
- }
- newsk->sk_state = WANSOCK_CONNECTING;
-
-
- /* Fill in the standard sock address info */
-
- sll->sll_family = AF_WANPIPE;
- sll->sll_hatype = dev->type;
- sll->sll_protocol = skb->protocol;
- sll->sll_pkttype = skb->pkt_type;
- sll->sll_ifindex = dev->ifindex;
- sll->sll_halen = 0;
-
- skb->dev = dev;
- sk->sk_ack_backlog++;
-
- /* We must do this manually, since the sock_queue_rcv_skb()
- * function sets the skb->dev to NULL. However, we use
- * the dev field in the accept function.*/
- if (atomic_read(&sk->sk_rmem_alloc) + skb->truesize >=
- (unsigned)sk->sk_rcvbuf) {
-
- wanpipe_unlink_driver(newsk);
- wanpipe_kill_sock_irq (newsk);
- --sk->sk_ack_backlog;
- return -ENOMEM;
- }
-
- skb_set_owner_r(skb, sk);
- skb_queue_tail(&sk->sk_receive_queue, skb);
- sk->sk_data_ready(sk, skb->len);
-
- return 0;
-}
-
-
-
-/*============================================================
- * wanpipe_make_new
- *
- * Create a new sock, and allocate a wanpipe private
- * structure to it. Also, copy the important data
- * from the original sock to the new sock.
- *
- * This function is used by wanpipe_listen_rcv() listen
- * bottom half handler. A copy of the listening sock
- * is created using this function.
- *
- *===========================================================*/
-
-static struct sock *wanpipe_make_new(struct sock *osk)
-{
- struct sock *sk;
-
- if (osk->sk_type != SOCK_RAW)
- return NULL;
-
- if ((sk = wanpipe_alloc_socket()) == NULL)
- return NULL;
-
- sk->sk_type = osk->sk_type;
- sk->sk_socket = osk->sk_socket;
- sk->sk_priority = osk->sk_priority;
- sk->sk_protocol = osk->sk_protocol;
- wp_sk(sk)->num = wp_sk(osk)->num;
- sk->sk_rcvbuf = osk->sk_rcvbuf;
- sk->sk_sndbuf = osk->sk_sndbuf;
- sk->sk_state = WANSOCK_CONNECTING;
- sk->sk_sleep = osk->sk_sleep;
-
- if (sock_flag(osk, SOCK_DBG))
- sock_set_flag(sk, SOCK_DBG);
-
- return sk;
-}
-
-/*
- * FIXME: wanpipe_opt has to include a sock in its definition and stop using
- * sk_protinfo, but this code is not even compilable now, so lets leave it for
- * later.
- */
-static struct proto wanpipe_proto = {
- .name = "WANPIPE",
- .owner = THIS_MODULE,
- .obj_size = sizeof(struct sock),
-};
-
-/*============================================================
- * wanpipe_make_new
- *
- * Allocate memory for the a new sock, and sock
- * private data.
- *
- * Increment the module use count.
- *
- * This function is used by wanpipe_create() and
- * wanpipe_make_new() functions.
- *
- *===========================================================*/
-
-static struct sock *wanpipe_alloc_socket(void)
-{
- struct sock *sk;
- struct wanpipe_opt *wan_opt;
-
- if ((sk = sk_alloc(PF_WANPIPE, GFP_ATOMIC, &wanpipe_proto, 1)) == NULL)
- return NULL;
-
- if ((wan_opt = kzalloc(sizeof(struct wanpipe_opt), GFP_ATOMIC)) == NULL) {
- sk_free(sk);
- return NULL;
- }
-
- wp_sk(sk) = wan_opt;
-
- /* Use timer to send data to the driver. This will act
- * as a BH handler for sendmsg functions */
- init_timer(&wan_opt->tx_timer);
- wan_opt->tx_timer.data = (unsigned long)sk;
- wan_opt->tx_timer.function = wanpipe_delayed_transmit;
-
- sock_init_data(NULL, sk);
- return sk;
-}
-
-
-/*============================================================
- * wanpipe_sendmsg
- *
- * This function implements a sendto() system call,
- * for AF_WANPIPE socket family.
- * During socket bind() sk->sk_bound_dev_if is initialized
- * to a correct network device. This number is used
- * to find a network device to which the packet should
- * be passed to.
- *
- * Each packet is queued into sk->sk_write_queue and
- * delayed transmit bottom half handler is marked for
- * execution.
- *
- * A socket must be in WANSOCK_CONNECTED state before
- * a packet is queued into sk->sk_write_queue.
- *===========================================================*/
-
-static int wanpipe_sendmsg(struct kiocb *iocb, struct socket *sock,
- struct msghdr *msg, int len)
-{
- wanpipe_opt *wp;
- struct sock *sk = sock->sk;
- struct wan_sockaddr_ll *saddr=(struct wan_sockaddr_ll *)msg->msg_name;
- struct sk_buff *skb;
- struct net_device *dev;
- unsigned short proto;
- unsigned char *addr;
- int ifindex, err, reserve = 0;
-
-
- if (!sock_flag(sk, SOCK_ZAPPED))
- return -ENETDOWN;
-
- if (sk->sk_state != WANSOCK_CONNECTED)
- return -ENOTCONN;
-
- if (msg->msg_flags & ~(MSG_DONTWAIT|MSG_CMSG_COMPAT))
- return(-EINVAL);
-
- /* it was <=, now one can send
- * zero length packets */
- if (len < sizeof(x25api_hdr_t))
- return -EINVAL;
-
- wp = wp_sk(sk);
-
- if (saddr == NULL) {
- ifindex = sk->sk_bound_dev_if;
- proto = wp->num;
- addr = NULL;
-
- }else{
- if (msg->msg_namelen < sizeof(struct wan_sockaddr_ll)){
- return -EINVAL;
- }
-
- ifindex = sk->sk_bound_dev_if;
- proto = saddr->sll_protocol;
- addr = saddr->sll_addr;
- }
-
- dev = dev_get_by_index(ifindex);
- if (dev == NULL){
- printk(KERN_INFO "wansock: Send failed, dev index: %i\n",ifindex);
- return -ENXIO;
- }
- dev_put(dev);
-
- if (sock->type == SOCK_RAW)
- reserve = dev->hard_header_len;
-
- if (len > dev->mtu+reserve){
- return -EMSGSIZE;
- }
-
- skb = sock_alloc_send_skb(sk, len + LL_RESERVED_SPACE(dev),
- msg->msg_flags & MSG_DONTWAIT, &err);
-
- if (skb==NULL){
- goto out_unlock;
- }
-
- skb_reserve(skb, LL_RESERVED_SPACE(dev));
- skb->nh.raw = skb->data;
-
- /* Returns -EFAULT on error */
- err = memcpy_fromiovec(skb_put(skb,len), msg->msg_iov, len);
- if (err){
- goto out_free;
- }
-
- if (dev->hard_header) {
- int res;
- err = -EINVAL;
- res = dev->hard_header(skb, dev, ntohs(proto), addr, NULL, len);
- if (res<0){
- goto out_free;
- }
- }
-
- skb->protocol = proto;
- skb->dev = dev;
- skb->priority = sk->sk_priority;
- skb->pkt_type = WAN_PACKET_DATA;
-
- err = -ENETDOWN;
- if (!(dev->flags & IFF_UP))
- goto out_free;
-
- if (atomic_read(&sk->sk_wmem_alloc) + skb->truesize >
- (unsigned int)sk->sk_sndbuf){
- kfree_skb(skb);
- return -ENOBUFS;
- }
-
- skb_queue_tail(&sk->sk_write_queue,skb);
- atomic_inc(&wp->packet_sent);
-
- if (!(test_and_set_bit(0, &wp->timer)))
- mod_timer(&wp->tx_timer, jiffies + 1);
-
- return(len);
-
-out_free:
- kfree_skb(skb);
-out_unlock:
- return err;
-}
-
-/*============================================================
- * wanpipe_delayed_tarnsmit
- *
- * Transmit bottom half handler. It dequeues packets
- * from sk->sk_write_queue and passes them to the
- * driver. If the driver is busy, the packet is
- * re-enqueued.
- *
- * Packet Sent counter is decremented on successful
- * transmission.
- *===========================================================*/
-
-
-static void wanpipe_delayed_transmit (unsigned long data)
-{
- struct sock *sk=(struct sock *)data;
- struct sk_buff *skb;
- wanpipe_opt *wp = wp_sk(sk);
- struct net_device *dev = wp->dev;
- sdla_t *card = (sdla_t*)wp->card;
-
- if (!card || !dev){
- clear_bit(0, &wp->timer);
- DBG_PRINTK(KERN_INFO "wansock: Transmit delay, no dev or card\n");
- return;
- }
-
- if (sk->sk_state != WANSOCK_CONNECTED || !sock_flag(sk, SOCK_ZAPPED)) {
- clear_bit(0, &wp->timer);
- DBG_PRINTK(KERN_INFO "wansock: Tx Timer, State not CONNECTED\n");
- return;
- }
-
- /* If driver is executing command, we must offload
- * the board by not sending data. Otherwise a
- * pending command will never get a free buffer
- * to execute */
- if (atomic_read(&card->u.x.command_busy)){
- wp->tx_timer.expires = jiffies + SLOW_BACKOFF;
- add_timer(&wp->tx_timer);
- DBG_PRINTK(KERN_INFO "wansock: Tx Timer, command bys BACKOFF\n");
- return;
- }
-
-
- if (test_and_set_bit(0,&wanpipe_tx_critical)){
- printk(KERN_INFO "WanSock: Tx timer critical %s\n",dev->name);
- wp->tx_timer.expires = jiffies + SLOW_BACKOFF;
- add_timer(&wp->tx_timer);
- return;
- }
-
- /* Check for a packet in the fifo and send */
- if ((skb = skb_dequeue(&sk->sk_write_queue)) != NULL){
-
- if (dev->hard_start_xmit(skb, dev) != 0){
-
- /* Driver failed to transmit, re-enqueue
- * the packet and retry again later */
- skb_queue_head(&sk->sk_write_queue,skb);
- clear_bit(0,&wanpipe_tx_critical);
- return;
- }else{
-
- /* Packet Sent successful. Check for more packets
- * if more packets, re-trigger the transmit routine
- * other wise exit
- */
- atomic_dec(&wp->packet_sent);
-
- if (skb_peek(&sk->sk_write_queue) == NULL) {
- /* If there is nothing to send, kick
- * the poll routine, which will trigger
- * the application to send more data */
- sk->sk_data_ready(sk, 0);
- clear_bit(0, &wp->timer);
- }else{
- /* Reschedule as fast as possible */
- wp->tx_timer.expires = jiffies + 1;
- add_timer(&wp->tx_timer);
- }
- }
- }
- clear_bit(0,&wanpipe_tx_critical);
-}
-
-/*============================================================
- * execute_command
- *
- * Execute x25api commands. The atomic variable
- * chan->command is used to indicate to the driver that
- * command is pending for execution. The acutal command
- * structure is placed into a sock mbox structure
- * (wp_sk(sk)->mbox).
- *
- * The sock private structure, mbox is
- * used as shared memory between sock and the driver.
- * Driver uses the sock mbox to execute the command
- * and return the result.
- *
- * For all command except PLACE CALL, the function
- * waits for the result. PLACE CALL can be ether
- * blocking or nonblocking. The user sets this option
- * via ioctl call.
- *===========================================================*/
-
-
-static int execute_command(struct sock *sk, unsigned char cmd, unsigned int flags)
-{
- wanpipe_opt *wp = wp_sk(sk);
- struct net_device *dev;
- wanpipe_common_t *chan=NULL;
- int err=0;
- DECLARE_WAITQUEUE(wait, current);
-
- dev = dev_get_by_index(sk->sk_bound_dev_if);
- if (dev == NULL){
- printk(KERN_INFO "wansock: Exec failed no dev %i\n",
- sk->sk_bound_dev_if);
- return -ENODEV;
- }
- dev_put(dev);
-
- if ((chan=dev->priv) == NULL){
- printk(KERN_INFO "wansock: Exec cmd failed no priv area\n");
- return -ENODEV;
- }
-
- if (atomic_read(&chan->command)){
- printk(KERN_INFO "wansock: ERROR: Command already running %x, %s\n",
- atomic_read(&chan->command),dev->name);
- return -EINVAL;
- }
-
- if (!wp->mbox) {
- printk(KERN_INFO "wansock: In execute without MBOX\n");
- return -EINVAL;
- }
-
- ((mbox_cmd_t*)wp->mbox)->cmd.command = cmd;
- ((mbox_cmd_t*)wp->mbox)->cmd.lcn = wp->lcn;
- ((mbox_cmd_t*)wp->mbox)->cmd.result = 0x7F;
-
-
- if (flags & O_NONBLOCK){
- cmd |= 0x80;
- atomic_set(&chan->command, cmd);
- }else{
- atomic_set(&chan->command, cmd);
- }
-
- add_wait_queue(sk->sk_sleep,&wait);
- current->state = TASK_INTERRUPTIBLE;
- for (;;){
- if (((mbox_cmd_t*)wp->mbox)->cmd.result != 0x7F) {
- err = 0;
- break;
- }
- if (signal_pending(current)) {
- err = -ERESTARTSYS;
- break;
- }
- schedule();
- }
- current->state = TASK_RUNNING;
- remove_wait_queue(sk->sk_sleep,&wait);
-
- return err;
-}
-
-/*============================================================
- * wanpipe_destroy_timer
- *
- * Used by wanpipe_release, to delay release of
- * the socket.
- *===========================================================*/
-
-static void wanpipe_destroy_timer(unsigned long data)
-{
- struct sock *sk=(struct sock *)data;
- wanpipe_opt *wp = wp_sk(sk);
-
- if ((!atomic_read(&sk->sk_wmem_alloc) &&
- !atomic_read(&sk->sk_rmem_alloc)) ||
- (++wp->force == 5)) {
-
- if (atomic_read(&sk->sk_wmem_alloc) ||
- atomic_read(&sk->sk_rmem_alloc))
- printk(KERN_INFO "wansock: Warning, Packet Discarded due to sock shutdown!\n");
-
- kfree(wp);
- wp_sk(sk) = NULL;
-
- if (atomic_read(&sk->sk_refcnt) != 1) {
- atomic_set(&sk->sk_refcnt, 1);
- DBG_PRINTK(KERN_INFO "wansock: Error, wrong reference count: %i ! :delay.\n",
- atomic_read(&sk->sk_refcnt));
- }
- sock_put(sk);
- atomic_dec(&wanpipe_socks_nr);
- return;
- }
-
- sk->sk_timer.expires = jiffies + 5 * HZ;
- add_timer(&sk->sk_timer);
- printk(KERN_INFO "wansock: packet sk destroy delayed\n");
-}
-
-/*============================================================
- * wanpipe_unlink_driver
- *
- * When the socket is released, this function is
- * used to remove links that bind the sock and the
- * driver together.
- *===========================================================*/
-static void wanpipe_unlink_driver (struct sock *sk)
-{
- struct net_device *dev;
- wanpipe_common_t *chan=NULL;
-
- sock_reset_flag(sk, SOCK_ZAPPED);
- sk->sk_state = WANSOCK_DISCONNECTED;
- wp_sk(sk)->dev = NULL;
-
- dev = dev_get_by_index(sk->sk_bound_dev_if);
- if (!dev){
- printk(KERN_INFO "wansock: No dev on release\n");
- return;
- }
- dev_put(dev);
-
- if ((chan = dev->priv) == NULL){
- printk(KERN_INFO "wansock: No Priv Area on release\n");
- return;
- }
-
- set_bit(0,&chan->common_critical);
- chan->sk=NULL;
- chan->func=NULL;
- chan->mbox=NULL;
- chan->tx_timer=NULL;
- clear_bit(0,&chan->common_critical);
- release_device(dev);
-
- return;
-}
-
-/*============================================================
- * wanpipe_link_driver
- *
- * Upon successful bind(), sock is linked to a driver
- * by binding in the wanpipe_rcv() bottom half handler
- * to the driver function pointer, as well as sock and
- * sock mailbox addresses. This way driver can pass
- * data up the socket.
- *===========================================================*/
-
-static void wanpipe_link_driver(struct net_device *dev, struct sock *sk)
-{
- wanpipe_opt *wp = wp_sk(sk);
- wanpipe_common_t *chan = dev->priv;
- if (!chan)
- return;
- set_bit(0,&chan->common_critical);
- chan->sk=sk;
- chan->func=wanpipe_rcv;
- chan->mbox = wp->mbox;
- chan->tx_timer = &wp->tx_timer;
- wp->dev = dev;
- sock_set_flag(sk, SOCK_ZAPPED);
- clear_bit(0,&chan->common_critical);
-}
-
-
-/*============================================================
- * release_device
- *
- * During sock release, clear a critical bit, which
- *	marks the device as being taken.
- *===========================================================*/
-
-
-static void release_device(struct net_device *dev)
-{
- wanpipe_common_t *chan=dev->priv;
- clear_bit(0,(void*)&chan->rw_bind);
-}
-
-/*============================================================
- * wanpipe_release
- *
- * Close a PACKET socket. This is fairly simple. We
- * immediately go to 'closed' state and remove our
- * protocol entry in the device list.
- *===========================================================*/
-
-static int wanpipe_release(struct socket *sock)
-{
- wanpipe_opt *wp;
- struct sock *sk = sock->sk;
-
- if (!sk)
- return 0;
-
- wp = wp_sk(sk);
- check_write_queue(sk);
-
-	/* Kill the tx timer; if we don't kill it now, the timer
-	 * will run after we kill the sock. The timer code will
-	 * then try to access the killed sock and cause
-	 * a kernel panic */
-
- del_timer(&wp->tx_timer);
-
- /*
- * Unhook packet receive handler.
- */
-
- if (wp->num == htons(X25_PROT) &&
- sk->sk_state != WANSOCK_DISCONNECTED && sock_flag(sk, SOCK_ZAPPED)) {
- struct net_device *dev = dev_get_by_index(sk->sk_bound_dev_if);
- wanpipe_common_t *chan;
- if (dev){
- chan=dev->priv;
- atomic_set(&chan->disconnect,1);
- DBG_PRINTK(KERN_INFO "wansock: Sending Clear Indication %i\n",
- sk->sk_state);
- dev_put(dev);
- }
- }
-
- set_bit(1,&wanpipe_tx_critical);
- write_lock(&wanpipe_sklist_lock);
- sk_del_node_init(sk);
- write_unlock(&wanpipe_sklist_lock);
- clear_bit(1,&wanpipe_tx_critical);
-
-
-
- release_driver(sk);
-
-
- /*
- * Now the socket is dead. No more input will appear.
- */
-
- sk->sk_state_change(sk); /* It is useless. Just for sanity. */
-
- sock->sk = NULL;
- sk->sk_socket = NULL;
- sock_set_flag(sk, SOCK_DEAD);
-
- /* Purge queues */
- skb_queue_purge(&sk->sk_receive_queue);
- skb_queue_purge(&sk->sk_write_queue);
- skb_queue_purge(&sk->sk_error_queue);
-
- if (atomic_read(&sk->sk_rmem_alloc) ||
- atomic_read(&sk->sk_wmem_alloc)) {
- del_timer(&sk->sk_timer);
- printk(KERN_INFO "wansock: Killing in Timer R %i , W %i\n",
- atomic_read(&sk->sk_rmem_alloc),
- atomic_read(&sk->sk_wmem_alloc));
- sk->sk_timer.data = (unsigned long)sk;
- sk->sk_timer.expires = jiffies + HZ;
- sk->sk_timer.function = wanpipe_destroy_timer;
- add_timer(&sk->sk_timer);
- return 0;
- }
-
- kfree(wp);
- wp_sk(sk) = NULL;
-
- if (atomic_read(&sk->sk_refcnt) != 1) {
- DBG_PRINTK(KERN_INFO "wansock: Error, wrong reference count: %i !:release.\n",
- atomic_read(&sk->sk_refcnt));
- atomic_set(&sk->sk_refcnt, 1);
- }
- sock_put(sk);
- atomic_dec(&wanpipe_socks_nr);
- return 0;
-}
-
-/*============================================================
- * check_write_queue
- *
- *	During sock shutdown, if the sock state is
- *	WANSOCK_CONNECTED and there is transmit data
- *	pending, wait until the data is released
- *	before proceeding.
- *===========================================================*/
-
-static void check_write_queue(struct sock *sk)
-{
-
- if (sk->sk_state != WANSOCK_CONNECTED)
- return;
-
- if (!atomic_read(&sk->sk_wmem_alloc))
- return;
-
- printk(KERN_INFO "wansock: MAJOR ERROR, Data lost on sock release !!!\n");
-
-}
-
-/*============================================================
- * release_driver
- *
- * This function is called during sock shutdown, to
- * release any resources and links that bind the sock
- * to the driver. It also changes the state of the
- * sock to WANSOCK_DISCONNECTED
- *===========================================================*/
-
-static void release_driver(struct sock *sk)
-{
- wanpipe_opt *wp;
- struct sk_buff *skb=NULL;
- struct sock *deadsk=NULL;
-
- if (sk->sk_state == WANSOCK_LISTEN ||
- sk->sk_state == WANSOCK_BIND_LISTEN) {
- while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) {
- if ((deadsk = get_newsk_from_skb(skb))){
- DBG_PRINTK (KERN_INFO "wansock: RELEASE: FOUND DEAD SOCK\n");
- sock_set_flag(deadsk, SOCK_DEAD);
- start_cleanup_timer(deadsk);
- }
- kfree_skb(skb);
- }
- if (sock_flag(sk, SOCK_ZAPPED))
- wanpipe_unlink_card(sk);
- }else{
- if (sock_flag(sk, SOCK_ZAPPED))
- wanpipe_unlink_driver(sk);
- }
- sk->sk_state = WANSOCK_DISCONNECTED;
- sk->sk_bound_dev_if = 0;
- sock_reset_flag(sk, SOCK_ZAPPED);
- wp = wp_sk(sk);
-
- if (wp) {
- kfree(wp->mbox);
- wp->mbox = NULL;
- }
-}
-
-/*============================================================
- * start_cleanup_timer
- *
- *	If new incoming calls are pending but the socket
- *	is being released, start the timer which will
- *	invoke the kill routines for pending socks.
- *===========================================================*/
-
-
-static void start_cleanup_timer (struct sock *sk)
-{
- del_timer(&sk->sk_timer);
- sk->sk_timer.data = (unsigned long)sk;
- sk->sk_timer.expires = jiffies + HZ;
- sk->sk_timer.function = wanpipe_kill_sock_timer;
- add_timer(&sk->sk_timer);
-}
-
-
-/*============================================================
- * wanpipe_kill_sock
- *
- * This is a function which performs actual killing
- * of the sock. It releases socket resources,
- * and unlinks the sock from the driver.
- *===========================================================*/
-
-static void wanpipe_kill_sock_timer (unsigned long data)
-{
-
- struct sock *sk = (struct sock *)data;
- struct sock **skp;
-
- if (!sk)
- return;
-
- /* This function can be called from interrupt. We must use
- * appropriate locks */
-
- if (test_bit(1,&wanpipe_tx_critical)){
- sk->sk_timer.expires = jiffies + 10;
- add_timer(&sk->sk_timer);
- return;
- }
-
- write_lock(&wanpipe_sklist_lock);
- sk_del_node_init(sk);
- write_unlock(&wanpipe_sklist_lock);
-
-
- if (wp_sk(sk)->num == htons(X25_PROT) &&
- sk->sk_state != WANSOCK_DISCONNECTED) {
- struct net_device *dev = dev_get_by_index(sk->sk_bound_dev_if);
- wanpipe_common_t *chan;
- if (dev){
- chan=dev->priv;
- atomic_set(&chan->disconnect,1);
- dev_put(dev);
- }
- }
-
- release_driver(sk);
-
- sk->sk_socket = NULL;
-
- /* Purge queues */
- skb_queue_purge(&sk->sk_receive_queue);
- skb_queue_purge(&sk->sk_write_queue);
- skb_queue_purge(&sk->sk_error_queue);
-
- if (atomic_read(&sk->sk_rmem_alloc) ||
- atomic_read(&sk->sk_wmem_alloc)) {
- del_timer(&sk->sk_timer);
- printk(KERN_INFO "wansock: Killing SOCK in Timer\n");
- sk->sk_timer.data = (unsigned long)sk;
- sk->sk_timer.expires = jiffies + HZ;
- sk->sk_timer.function = wanpipe_destroy_timer;
- add_timer(&sk->sk_timer);
- return;
- }
-
- kfree(wp_sk(sk));
- wp_sk(sk) = NULL;
-
- if (atomic_read(&sk->sk_refcnt) != 1) {
- atomic_set(&sk->sk_refcnt, 1);
- DBG_PRINTK(KERN_INFO "wansock: Error, wrong reference count: %i ! :timer.\n",
- atomic_read(&sk->sk_refcnt));
- }
- sock_put(sk);
- atomic_dec(&wanpipe_socks_nr);
- return;
-}
-
-static void wanpipe_kill_sock_accept (struct sock *sk)
-{
-
- struct sock **skp;
-
- if (!sk)
- return;
-
- /* This function can be called from interrupt. We must use
- * appropriate locks */
-
- write_lock(&wanpipe_sklist_lock);
- sk_del_node_init(sk);
- write_unlock(&wanpipe_sklist_lock);
-
- sk->sk_socket = NULL;
-
-
- kfree(wp_sk(sk));
- wp_sk(sk) = NULL;
-
- if (atomic_read(&sk->sk_refcnt) != 1) {
- atomic_set(&sk->sk_refcnt, 1);
- DBG_PRINTK(KERN_INFO "wansock: Error, wrong reference count: %i ! :timer.\n",
- atomic_read(&sk->sk_refcnt));
- }
- sock_put(sk);
- atomic_dec(&wanpipe_socks_nr);
- return;
-}
-
-
-static void wanpipe_kill_sock_irq (struct sock *sk)
-{
-
- if (!sk)
- return;
-
- sk->sk_socket = NULL;
-
- kfree(wp_sk(sk));
- wp_sk(sk) = NULL;
-
- if (atomic_read(&sk->sk_refcnt) != 1) {
- atomic_set(&sk->sk_refcnt, 1);
- DBG_PRINTK(KERN_INFO "wansock: Error, wrong reference count: %i !:listen.\n",
- atomic_read(&sk->sk_refcnt));
- }
- sock_put(sk);
- atomic_dec(&wanpipe_socks_nr);
-}
-
-
-/*============================================================
- * wanpipe_do_bind
- *
- * Bottom half of the binding system call.
- * Once the wanpipe_bind() function checks the
- * legality of the call, this function binds the
- * sock to the driver.
- *===========================================================*/
-
-static int wanpipe_do_bind(struct sock *sk, struct net_device *dev,
- int protocol)
-{
- wanpipe_opt *wp = wp_sk(sk);
- wanpipe_common_t *chan=NULL;
- int err=0;
-
- if (sock_flag(sk, SOCK_ZAPPED)) {
- err = -EALREADY;
- goto bind_unlock_exit;
- }
-
- wp->num = protocol;
-
- if (protocol == 0){
- release_device(dev);
- err = -EINVAL;
- goto bind_unlock_exit;
- }
-
- if (dev) {
- if (dev->flags&IFF_UP) {
- chan=dev->priv;
- sk->sk_state = chan->state;
-
- if (wp->num == htons(X25_PROT) &&
- sk->sk_state != WANSOCK_DISCONNECTED &&
- sk->sk_state != WANSOCK_CONNECTING) {
- DBG_PRINTK(KERN_INFO
- "wansock: Binding to Device not DISCONNECTED %i\n",
- sk->sk_state);
- release_device(dev);
- err = -EAGAIN;
- goto bind_unlock_exit;
- }
-
- wanpipe_link_driver(dev,sk);
- sk->sk_bound_dev_if = dev->ifindex;
-
- /* X25 Specific option */
- if (wp->num == htons(X25_PROT))
- wp_sk(sk)->svc = chan->svc;
-
- } else {
- sk->sk_err = ENETDOWN;
- sk->sk_error_report(sk);
- release_device(dev);
- err = -EINVAL;
- }
- } else {
- err = -ENODEV;
- }
-bind_unlock_exit:
- /* FIXME where is this lock */
-
- return err;
-}
-
-/*============================================================
- * wanpipe_bind
- *
- * BIND() System call, which is bound to the AF_WANPIPE
- * operations structure. It checks for correct wanpipe
- *      card name, and cross-references interface names with
- *      the card names. Thus, the interface name must belong to
- * the actual card.
- *===========================================================*/
-
-
-static int wanpipe_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
-{
- struct wan_sockaddr_ll *sll = (struct wan_sockaddr_ll*)uaddr;
- struct sock *sk=sock->sk;
- wanpipe_opt *wp = wp_sk(sk);
- struct net_device *dev = NULL;
- sdla_t *card=NULL;
- char name[15];
-
- /*
- * Check legality
- */
-
- if (addr_len < sizeof(struct wan_sockaddr_ll)){
- printk(KERN_INFO "wansock: Address length error\n");
- return -EINVAL;
- }
- if (sll->sll_family != AF_WANPIPE){
- printk(KERN_INFO "wansock: Illegal family name specified.\n");
- return -EINVAL;
- }
-
- card = wanpipe_find_card (sll->sll_card);
- if (!card){
- printk(KERN_INFO "wansock: Wanpipe card not found: %s\n",sll->sll_card);
- return -ENODEV;
- }else{
- wp_sk(sk)->card = (void *)card;
- }
-
- if (!strcmp(sll->sll_device,"svc_listen")){
-
- /* Bind a sock to a card structure for listening
- */
- int err=0;
-
-		/* This is an X.25-specific area; if the protocol doesn't
-		 * match, return an error */
- if (sll->sll_protocol != htons(X25_PROT))
- return -EINVAL;
-
- err= wanpipe_link_card (sk);
- if (err < 0)
- return err;
-
- if (sll->sll_protocol)
- wp->num = sll->sll_protocol;
- sk->sk_state = WANSOCK_BIND_LISTEN;
- return 0;
-
- }else if (!strcmp(sll->sll_device,"svc_connect")){
-
-		/* This is an X.25-specific area; if the protocol doesn't
-		 * match, return an error */
- if (sll->sll_protocol != htons(X25_PROT))
- return -EINVAL;
-
- /* Find a free device
- */
- dev = wanpipe_find_free_dev(card);
- if (dev == NULL){
- DBG_PRINTK(KERN_INFO "wansock: No free network devices for card %s\n",
- card->devname);
- return -EINVAL;
- }
- }else{
-		/* Bind a socket to an interface name
- * This is used by PVC mostly
- */
- strlcpy(name,sll->sll_device,sizeof(name));
- dev = dev_get_by_name(name);
- if (dev == NULL){
- printk(KERN_INFO "wansock: Failed to get Dev from name: %s,\n",
- name);
- return -ENODEV;
- }
-
- dev_put(dev);
-
- if (check_dev(dev, card)){
- printk(KERN_INFO "wansock: Device %s, doesn't belong to card %s\n",
- dev->name, card->devname);
- return -EINVAL;
- }
- if (get_atomic_device (dev))
- return -EINVAL;
- }
-
- return wanpipe_do_bind(sk, dev, sll->sll_protocol ? : wp->num);
-}
-
-/*============================================================
- * get_atomic_device
- *
- * Sets a bit atomically which indicates that
- * the interface is taken. This avoids race conditions.
- *===========================================================*/
-
-
-static inline int get_atomic_device(struct net_device *dev)
-{
- wanpipe_common_t *chan = dev->priv;
- if (!test_and_set_bit(0,(void *)&chan->rw_bind)){
- return 0;
- }
- return 1;
-}
-
-/*============================================================
- * check_dev
- *
- * Check that device name belongs to a particular card.
- *===========================================================*/
-
-static int check_dev(struct net_device *dev, sdla_t *card)
-{
- struct net_device* tmp_dev;
-
- for (tmp_dev = card->wandev.dev; tmp_dev;
- tmp_dev = *((struct net_device **)tmp_dev->priv)) {
- if (tmp_dev->ifindex == dev->ifindex){
- return 0;
- }
- }
- return 1;
-}
-
-/*============================================================
- * wanpipe_find_free_dev
- *
- *	Find a free network interface. If found, set the atomic
- *	bit indicating that the interface is taken.
- * X25API Specific.
- *===========================================================*/
-
-struct net_device *wanpipe_find_free_dev(sdla_t *card)
-{
- struct net_device* dev;
- volatile wanpipe_common_t *chan;
-
- if (test_and_set_bit(0,&find_free_critical)){
- printk(KERN_INFO "CRITICAL in Find Free\n");
- }
-
- for (dev = card->wandev.dev; dev;
- dev = *((struct net_device **)dev->priv)) {
- chan = dev->priv;
- if (!chan)
- continue;
- if (chan->usedby == API && chan->svc){
- if (!get_atomic_device (dev)){
- if (chan->state != WANSOCK_DISCONNECTED){
- release_device(dev);
- }else{
- clear_bit(0,&find_free_critical);
- return dev;
- }
- }
- }
- }
- clear_bit(0,&find_free_critical);
- return NULL;
-}
-
-/*============================================================
- * wanpipe_create
- *
- * SOCKET() System call. It allocates a sock structure
- * and adds the socket to the wanpipe_sk_list.
- *	Creates an AF_WANPIPE socket.
- *===========================================================*/
-
-static int wanpipe_create(struct socket *sock, int protocol)
-{
- struct sock *sk;
-
- //FIXME: This checks for root user, SECURITY ?
- //if (!capable(CAP_NET_RAW))
- // return -EPERM;
-
- if (sock->type != SOCK_DGRAM && sock->type != SOCK_RAW)
- return -ESOCKTNOSUPPORT;
-
- sock->state = SS_UNCONNECTED;
-
- if ((sk = wanpipe_alloc_socket()) == NULL)
- return -ENOBUFS;
-
- sk->sk_reuse = 1;
- sock->ops = &wanpipe_ops;
- sock_init_data(sock,sk);
-
- sock_reset_flag(sk, SOCK_ZAPPED);
- sk->sk_family = PF_WANPIPE;
- wp_sk(sk)->num = protocol;
- sk->sk_state = WANSOCK_DISCONNECTED;
- sk->sk_ack_backlog = 0;
- sk->sk_bound_dev_if = 0;
-
- atomic_inc(&wanpipe_socks_nr);
-
- /* We must disable interrupts because the ISR
- * can also change the list */
- set_bit(1,&wanpipe_tx_critical);
- write_lock(&wanpipe_sklist_lock);
- sk_add_node(sk, &wanpipe_sklist);
- write_unlock(&wanpipe_sklist_lock);
- clear_bit(1,&wanpipe_tx_critical);
-
- return(0);
-}
-
-
-/*============================================================
- * wanpipe_recvmsg
- *
- * Pull a packet from our receive queue and hand it
- * to the user. If necessary we block.
- *===========================================================*/
-
-static int wanpipe_recvmsg(struct kiocb *iocb, struct socket *sock,
- struct msghdr *msg, int len, int flags)
-{
- struct sock *sk = sock->sk;
- struct sk_buff *skb;
- int copied, err=-ENOBUFS;
-
-
- /*
- * If the address length field is there to be filled in, we fill
- * it in now.
- */
-
- msg->msg_namelen = sizeof(struct wan_sockaddr_ll);
-
- /*
- * Call the generic datagram receiver. This handles all sorts
- * of horrible races and re-entrancy so we can forget about it
- * in the protocol layers.
- *
-	 *	Now it will return ENETDOWN if the device has just gone down,
- * but then it will block.
- */
-
- if (flags & MSG_OOB){
- skb = skb_dequeue(&sk->sk_error_queue);
- }else{
- skb=skb_recv_datagram(sk,flags,1,&err);
- }
- /*
- * An error occurred so return it. Because skb_recv_datagram()
- * handles the blocking we don't see and worry about blocking
- * retries.
- */
-
- if(skb==NULL)
- goto out;
-
- /*
- * You lose any data beyond the buffer you gave. If it worries a
- * user program they can ask the device for its MTU anyway.
- */
-
- copied = skb->len;
- if (copied > len)
- {
- copied=len;
- msg->msg_flags|=MSG_TRUNC;
- }
-
- wanpipe_wakeup_driver(sk);
-
- /* We can't use skb_copy_datagram here */
- err = memcpy_toiovec(msg->msg_iov, skb->data, copied);
- if (err)
- goto out_free;
-
- sock_recv_timestamp(msg, sk, skb);
-
- if (msg->msg_name)
- memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
-
- /*
- * Free or return the buffer as appropriate. Again this
- * hides all the races and re-entrancy issues from us.
- */
- err = (flags&MSG_TRUNC) ? skb->len : copied;
-
-out_free:
- skb_free_datagram(sk, skb);
-out:
- return err;
-}
-
-
-/*============================================================
- * wanpipe_wakeup_driver
- *
- * If socket receive buffer is full and driver cannot
- * pass data up the sock, it sets a packet_block flag.
- *	This function checks that flag and, if the sock receive
- *	queue has room, kicks the driver BH handler.
- *
- * This way, driver doesn't have to poll the sock
- * receive queue.
- *===========================================================*/
-
-static void wanpipe_wakeup_driver(struct sock *sk)
-{
- struct net_device *dev = NULL;
- wanpipe_common_t *chan=NULL;
-
- dev = dev_get_by_index(sk->sk_bound_dev_if);
- if (!dev)
- return;
-
- dev_put(dev);
-
- if ((chan = dev->priv) == NULL)
- return;
-
- if (atomic_read(&chan->receive_block)){
- if (atomic_read(&sk->sk_rmem_alloc) <
- ((unsigned)sk->sk_rcvbuf * 0.9)) {
- printk(KERN_INFO "wansock: Queuing task for wanpipe\n");
- atomic_set(&chan->receive_block,0);
- wanpipe_queue_tq(&chan->wanpipe_task);
- wanpipe_mark_bh();
- }
- }
-}
-
-/*============================================================
- * wanpipe_getname
- *
- * I don't know what to do with this yet.
- * User can use this function to get sock address
- * information. Not very useful for Sangoma's purposes.
- *===========================================================*/
-
-
-static int wanpipe_getname(struct socket *sock, struct sockaddr *uaddr,
- int *uaddr_len, int peer)
-{
- struct net_device *dev;
- struct sock *sk = sock->sk;
- struct wan_sockaddr_ll *sll = (struct wan_sockaddr_ll*)uaddr;
-
- sll->sll_family = AF_WANPIPE;
- sll->sll_ifindex = sk->sk_bound_dev_if;
- sll->sll_protocol = wp_sk(sk)->num;
- dev = dev_get_by_index(sk->sk_bound_dev_if);
- if (dev) {
- sll->sll_hatype = dev->type;
- sll->sll_halen = dev->addr_len;
- memcpy(sll->sll_addr, dev->dev_addr, dev->addr_len);
- } else {
- sll->sll_hatype = 0; /* Bad: we have no ARPHRD_UNSPEC */
- sll->sll_halen = 0;
- }
- *uaddr_len = sizeof(*sll);
-
- dev_put(dev);
-
- return 0;
-}
-
-/*============================================================
- * wanpipe_notifier
- *
- *	If the driver turns off a network interface, this function
- *	will be invoked. Currently I treat it as a
- *	call disconnect. More thought should go into this
- * function.
- *
- * FIXME: More thought should go into this function.
- *
- *===========================================================*/
-
-static int wanpipe_notifier(struct notifier_block *this, unsigned long msg, void *data)
-{
- struct sock *sk;
- hlist_node *node;
- struct net_device *dev = (struct net_device *)data;
-
- sk_for_each(sk, node, &wanpipe_sklist) {
- struct wanpipe_opt *po = wp_sk(sk);
-
- if (!po)
- continue;
- if (dev == NULL)
- continue;
-
- switch (msg) {
- case NETDEV_DOWN:
- case NETDEV_UNREGISTER:
- if (dev->ifindex == sk->sk_bound_dev_if) {
- printk(KERN_INFO "wansock: Device down %s\n",dev->name);
- if (sock_flag(sk, SOCK_ZAPPED)) {
- wanpipe_unlink_driver(sk);
- sk->sk_err = ENETDOWN;
- sk->sk_error_report(sk);
- }
-
- if (msg == NETDEV_UNREGISTER) {
- printk(KERN_INFO "wansock: Unregistering Device: %s\n",
- dev->name);
- wanpipe_unlink_driver(sk);
- sk->sk_bound_dev_if = 0;
- }
- }
- break;
- case NETDEV_UP:
- if (dev->ifindex == sk->sk_bound_dev_if &&
- po->num && !sock_flag(sk, SOCK_ZAPPED)) {
- printk(KERN_INFO "wansock: Registering Device: %s\n",
- dev->name);
- wanpipe_link_driver(dev,sk);
- }
- break;
- }
- }
- return NOTIFY_DONE;
-}
-
-/*============================================================
- * wanpipe_ioctl
- *
- *	Execute user commands and set socket options.
- *
- * FIXME: More thought should go into this function.
- *
- *===========================================================*/
-
-static int wanpipe_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
-{
- struct sock *sk = sock->sk;
- int err;
-
- switch(cmd)
- {
- case SIOCGSTAMP:
- return sock_get_timestamp(sk, (struct timeval __user *)arg);
-
- case SIOC_WANPIPE_CHECK_TX:
-
- return atomic_read(&sk->sk_wmem_alloc);
-
- case SIOC_WANPIPE_SOCK_STATE:
-
- if (sk->sk_state == WANSOCK_CONNECTED)
- return 0;
-
- return 1;
-
-
- case SIOC_WANPIPE_GET_CALL_DATA:
-
- return get_ioctl_cmd (sk,(void*)arg);
-
- case SIOC_WANPIPE_SET_CALL_DATA:
-
- return set_ioctl_cmd (sk,(void*)arg);
-
- case SIOC_WANPIPE_ACCEPT_CALL:
- case SIOC_WANPIPE_CLEAR_CALL:
- case SIOC_WANPIPE_RESET_CALL:
-
- if ((err=set_ioctl_cmd(sk,(void*)arg)) < 0)
- return err;
-
- err=wanpipe_exec_cmd(sk,cmd,0);
- get_ioctl_cmd(sk,(void*)arg);
- return err;
-
- case SIOC_WANPIPE_DEBUG:
-
- return wanpipe_debug(sk,(void*)arg);
-
- case SIOC_WANPIPE_SET_NONBLOCK:
-
- if (sk->sk_state != WANSOCK_DISCONNECTED)
- return -EINVAL;
-
- sock->file->f_flags |= O_NONBLOCK;
- return 0;
-
-#ifdef CONFIG_INET
- case SIOCADDRT:
- case SIOCDELRT:
- case SIOCDARP:
- case SIOCGARP:
- case SIOCSARP:
- case SIOCDRARP:
- case SIOCGRARP:
- case SIOCSRARP:
- case SIOCGIFADDR:
- case SIOCSIFADDR:
- case SIOCGIFBRDADDR:
- case SIOCSIFBRDADDR:
- case SIOCGIFNETMASK:
- case SIOCSIFNETMASK:
- case SIOCGIFDSTADDR:
- case SIOCSIFDSTADDR:
- case SIOCSIFFLAGS:
- return inet_dgram_ops.ioctl(sock, cmd, arg);
-#endif
-
- default:
- return -ENOIOCTLCMD;
- }
- /*NOTREACHED*/
-}
-
-/*============================================================
- * wanpipe_debug
- *
- * This function will pass up information about all
- * active sockets.
- *
- * FIXME: More thought should go into this function.
- *
- *===========================================================*/
-
-static int wanpipe_debug (struct sock *origsk, void *arg)
-{
- struct sock *sk;
- struct hlist_node *node;
- struct net_device *dev = NULL;
- wanpipe_common_t *chan=NULL;
- int cnt=0, err=0;
- wan_debug_t *dbg_data = (wan_debug_t *)arg;
-
- sk_for_each(sk, node, &wanpipe_sklist) {
- wanpipe_opt *wp = wp_sk(sk);
-
- if (sk == origsk){
- continue;
- }
-
- if ((err=put_user(1, &dbg_data->debug[cnt].free)))
- return err;
- if ((err = put_user(sk->sk_state,
- &dbg_data->debug[cnt].state_sk)))
- return err;
- if ((err = put_user(sk->sk_rcvbuf,
- &dbg_data->debug[cnt].rcvbuf)))
- return err;
- if ((err = put_user(atomic_read(&sk->sk_rmem_alloc),
- &dbg_data->debug[cnt].rmem)))
- return err;
- if ((err = put_user(atomic_read(&sk->sk_wmem_alloc),
- &dbg_data->debug[cnt].wmem)))
- return err;
- if ((err = put_user(sk->sk_sndbuf,
- &dbg_data->debug[cnt].sndbuf)))
- return err;
- if ((err=put_user(sk_count, &dbg_data->debug[cnt].sk_count)))
- return err;
- if ((err=put_user(wp->poll_cnt, &dbg_data->debug[cnt].poll_cnt)))
- return err;
- if ((err = put_user(sk->sk_bound_dev_if,
- &dbg_data->debug[cnt].bound)))
- return err;
-
- if (sk->sk_bound_dev_if) {
- dev = dev_get_by_index(sk->sk_bound_dev_if);
- if (!dev)
- continue;
-
- chan=dev->priv;
- dev_put(dev);
-
- if ((err=put_user(chan->state, &dbg_data->debug[cnt].d_state)))
- return err;
- if ((err=put_user(chan->svc, &dbg_data->debug[cnt].svc)))
- return err;
-
- if ((err=put_user(atomic_read(&chan->command),
- &dbg_data->debug[cnt].command)))
- return err;
-
-
- if (wp){
- sdla_t *card = (sdla_t*)wp->card;
-
- if (card){
- if ((err=put_user(atomic_read(&card->u.x.command_busy),
- &dbg_data->debug[cnt].cmd_busy)))
- return err;
- }
-
- if ((err=put_user(wp->lcn,
- &dbg_data->debug[cnt].lcn)))
- return err;
-
- if (wp->mbox) {
- if ((err=put_user(1, &dbg_data->debug[cnt].mbox)))
- return err;
- }
- }
-
- if ((err=put_user(atomic_read(&chan->receive_block),
- &dbg_data->debug[cnt].rblock)))
- return err;
-
- if (copy_to_user(dbg_data->debug[cnt].name, dev->name, strlen(dev->name)))
- return -EFAULT;
- }
-
- if (++cnt == MAX_NUM_DEBUG)
- break;
- }
- return 0;
-}
-
-/*============================================================
- * get_ioctl_cmd
- *
- * Pass up the contents of socket MBOX to the user.
- *===========================================================*/
-
-static int get_ioctl_cmd (struct sock *sk, void *arg)
-{
- x25api_t *usr_data = (x25api_t *)arg;
- mbox_cmd_t *mbox_ptr;
- int err;
-
- if (usr_data == NULL)
- return -EINVAL;
-
- if (!wp_sk(sk)->mbox) {
- return -EINVAL;
- }
-
- mbox_ptr = (mbox_cmd_t *)wp_sk(sk)->mbox;
-
- if ((err=put_user(mbox_ptr->cmd.qdm, &usr_data->hdr.qdm)))
- return err;
- if ((err=put_user(mbox_ptr->cmd.cause, &usr_data->hdr.cause)))
- return err;
- if ((err=put_user(mbox_ptr->cmd.diagn, &usr_data->hdr.diagn)))
- return err;
- if ((err=put_user(mbox_ptr->cmd.length, &usr_data->hdr.length)))
- return err;
- if ((err=put_user(mbox_ptr->cmd.result, &usr_data->hdr.result)))
- return err;
- if ((err=put_user(mbox_ptr->cmd.lcn, &usr_data->hdr.lcn)))
- return err;
-
- if (mbox_ptr->cmd.length > 0){
- if (mbox_ptr->cmd.length > X25_MAX_DATA)
- return -EINVAL;
-
- if (copy_to_user(usr_data->data, mbox_ptr->data, mbox_ptr->cmd.length)){
- printk(KERN_INFO "wansock: Copy failed !!!\n");
- return -EFAULT;
- }
- }
- return 0;
-}
-
-/*============================================================
- * set_ioctl_cmd
- *
- *	Before a command can be executed, the socket MBOX must
- *	be created and initialized with user data.
- *===========================================================*/
-
-static int set_ioctl_cmd (struct sock *sk, void *arg)
-{
- x25api_t *usr_data = (x25api_t *)arg;
- mbox_cmd_t *mbox_ptr;
- int err;
-
- if (!wp_sk(sk)->mbox) {
- void *mbox_ptr;
- struct net_device *dev = dev_get_by_index(sk->sk_bound_dev_if);
- if (!dev)
- return -ENODEV;
-
- dev_put(dev);
-
- if ((mbox_ptr = kzalloc(sizeof(mbox_cmd_t), GFP_ATOMIC)) == NULL)
- return -ENOMEM;
-
- wp_sk(sk)->mbox = mbox_ptr;
-
- wanpipe_link_driver(dev,sk);
- }
-
- mbox_ptr = (mbox_cmd_t*)wp_sk(sk)->mbox;
- memset(mbox_ptr, 0, sizeof(mbox_cmd_t));
-
- if (usr_data == NULL){
- return 0;
- }
- if ((err=get_user(mbox_ptr->cmd.qdm, &usr_data->hdr.qdm)))
- return err;
- if ((err=get_user(mbox_ptr->cmd.cause, &usr_data->hdr.cause)))
- return err;
- if ((err=get_user(mbox_ptr->cmd.diagn, &usr_data->hdr.diagn)))
- return err;
- if ((err=get_user(mbox_ptr->cmd.length, &usr_data->hdr.length)))
- return err;
- if ((err=get_user(mbox_ptr->cmd.result, &usr_data->hdr.result)))
- return err;
-
- if (mbox_ptr->cmd.length > 0){
- if (mbox_ptr->cmd.length > X25_MAX_DATA)
- return -EINVAL;
-
- if (copy_from_user(mbox_ptr->data, usr_data->data, mbox_ptr->cmd.length)){
- printk(KERN_INFO "Copy failed\n");
- return -EFAULT;
- }
- }
- return 0;
-}
-
-
-/*======================================================================
- * wanpipe_poll
- *
- * Datagram poll: Again totally generic. This also handles
- * sequenced packet sockets providing the socket receive queue
- * is only ever holding data ready to receive.
- *
- * Note: when you _don't_ use this routine for this protocol,
- * and you use a different write policy from sock_writeable()
- * then please supply your own write_space callback.
- *=====================================================================*/
-
-unsigned int wanpipe_poll(struct file * file, struct socket *sock, poll_table *wait)
-{
- struct sock *sk = sock->sk;
- unsigned int mask;
-
- ++wp_sk(sk)->poll_cnt;
-
- poll_wait(file, sk->sk_sleep, wait);
- mask = 0;
-
- /* exceptional events? */
- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) {
- mask |= POLLPRI;
- return mask;
- }
- if (sk->sk_shutdown & RCV_SHUTDOWN)
- mask |= POLLHUP;
-
- /* readable? */
- if (!skb_queue_empty(&sk->sk_receive_queue)) {
- mask |= POLLIN | POLLRDNORM;
- }
-
- /* connection hasn't started yet */
- if (sk->sk_state == WANSOCK_CONNECTING) {
- return mask;
- }
-
- if (sk->sk_state == WANSOCK_DISCONNECTED) {
- mask = POLLPRI;
- return mask;
- }
-
- /* This check blocks the user process if there is
- * a packet already queued in the socket write queue.
-	 * This option is only for the X25API protocol; other
-	 * protocols like CHDLC enable streaming mode,
-	 * where multiple packets can be pending in the socket
- * transmit queue */
-
- if (wp_sk(sk)->num == htons(X25_PROT)) {
- if (atomic_read(&wp_sk(sk)->packet_sent))
- return mask;
- }
-
- /* writable? */
- if (sock_writeable(sk)){
- mask |= POLLOUT | POLLWRNORM | POLLWRBAND;
- }else{
- set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags);
- }
-
- return mask;
-}
-
-/*======================================================================
- * wanpipe_listen
- *
- * X25API Specific function. Set a socket into LISTENING MODE.
- *=====================================================================*/
-
-
-static int wanpipe_listen(struct socket *sock, int backlog)
-{
- struct sock *sk = sock->sk;
-
-	/* This is an X.25-specific area; if the protocol doesn't
-	 * match, return an error */
- if (wp_sk(sk)->num != htons(X25_PROT))
- return -EINVAL;
-
- if (sk->sk_state == WANSOCK_BIND_LISTEN) {
-
- sk->sk_max_ack_backlog = backlog;
- sk->sk_state = WANSOCK_LISTEN;
- return 0;
- }else{
- printk(KERN_INFO "wansock: Listening sock was not binded\n");
- }
-
- return -EINVAL;
-}
-
-/*======================================================================
- * wanpipe_link_card
- *
- * Connects the listening socket to the driver
- *=====================================================================*/
-
-static int wanpipe_link_card (struct sock *sk)
-{
- sdla_t *card = (sdla_t*)wp_sk(sk)->card;
-
- if (!card)
- return -ENOMEM;
-
- if ((card->sk != NULL) || (card->func != NULL)){
- printk(KERN_INFO "wansock: Listening queue is already established\n");
- return -EINVAL;
- }
-
- card->sk=sk;
- card->func=wanpipe_listen_rcv;
- sock_set_flag(sk, SOCK_ZAPPED);
-
- return 0;
-}
-
-/*======================================================================
- *	wanpipe_unlink_card
- *
- * X25API Specific function. Disconnect listening socket from
- * the driver.
- *=====================================================================*/
-
-static void wanpipe_unlink_card (struct sock *sk)
-{
- sdla_t *card = (sdla_t*)wp_sk(sk)->card;
-
- if (card){
- card->sk=NULL;
- card->func=NULL;
- }
-}
-
-/*======================================================================
- * wanpipe_exec_cmd
- *
- *	The ioctl function calls this function to execute a user command.
- *	The Connect() system call also calls this function to execute
- *	a place call. This function blocks until the command is executed.
- *=====================================================================*/
-
-static int wanpipe_exec_cmd(struct sock *sk, int cmd, unsigned int flags)
-{
- int err = -EINVAL;
- wanpipe_opt *wp = wp_sk(sk);
- mbox_cmd_t *mbox_ptr = (mbox_cmd_t*)wp->mbox;
-
- if (!mbox_ptr){
- printk(KERN_INFO "NO MBOX PTR !!!!!\n");
- return -EINVAL;
- }
-
-	/* This is an X.25-specific area; if the protocol doesn't
-	 * match, return an error */
- if (wp->num != htons(X25_PROT))
- return -EINVAL;
-
-
- switch (cmd){
-
- case SIOC_WANPIPE_ACCEPT_CALL:
-
- if (sk->sk_state != WANSOCK_CONNECTING) {
- err = -EHOSTDOWN;
- break;
- }
-
- err = execute_command(sk,X25_ACCEPT_CALL,0);
- if (err < 0)
- break;
-
- /* Update. Mar6 2000.
- * Do not set the sock lcn number here, since
- * it is done in wanpipe_listen_rcv().
- */
- if (sk->sk_state == WANSOCK_CONNECTED) {
- wp->lcn = ((mbox_cmd_t*)wp->mbox)->cmd.lcn;
- DBG_PRINTK(KERN_INFO "\nwansock: Accept OK %i\n",
- wp->lcn);
- err = 0;
-
- }else{
- DBG_PRINTK (KERN_INFO "\nwansock: Accept Failed %i\n",
- wp->lcn);
- wp->lcn = 0;
- err = -ECONNREFUSED;
- }
- break;
-
- case SIOC_WANPIPE_CLEAR_CALL:
-
- if (sk->sk_state == WANSOCK_DISCONNECTED) {
- err = -EINVAL;
- break;
- }
-
-
-		/* Check if data buffers are pending for transmission;
-		 * if so, check whether the user wants to wait until the data
-		 * is transmitted, or clear the call and drop the packets */
-
- if (atomic_read(&sk->sk_wmem_alloc) ||
- check_driver_busy(sk)) {
- mbox_cmd_t *mbox = wp->mbox;
- if (mbox->cmd.qdm & 0x80){
- mbox->cmd.result = 0x35;
- err = -EAGAIN;
- break;
- }
- }
-
- sk->sk_state = WANSOCK_DISCONNECTING;
-
- err = execute_command(sk,X25_CLEAR_CALL,0);
- if (err < 0)
- break;
-
- err = -ECONNREFUSED;
- if (sk->sk_state == WANSOCK_DISCONNECTED) {
- DBG_PRINTK(KERN_INFO "\nwansock: CLEAR OK %i\n",
- wp->lcn);
- wp->lcn = 0;
- err = 0;
- }
- break;
-
- case SIOC_WANPIPE_RESET_CALL:
-
- if (sk->sk_state != WANSOCK_CONNECTED) {
- err = -EINVAL;
- break;
- }
-
-
-		/* Check if data buffers are pending for transmission;
-		 * if so, check whether the user wants to wait until the data
-		 * is transmitted, or reset the call and drop the packets */
-
- if (atomic_read(&sk->sk_wmem_alloc) ||
- check_driver_busy(sk)) {
- mbox_cmd_t *mbox = wp->mbox;
- if (mbox->cmd.qdm & 0x80){
- mbox->cmd.result = 0x35;
- err = -EAGAIN;
- break;
- }
- }
-
-
- err = execute_command(sk, X25_RESET,0);
- if (err < 0)
- break;
-
- err = mbox_ptr->cmd.result;
- break;
-
-
- case X25_PLACE_CALL:
-
- err=execute_command(sk,X25_PLACE_CALL,flags);
- if (err < 0)
- break;
-
- if (sk->sk_state == WANSOCK_CONNECTED) {
-
- wp->lcn = ((mbox_cmd_t*)wp->mbox)->cmd.lcn;
-
- DBG_PRINTK(KERN_INFO "\nwansock: PLACE CALL OK %i\n",
- wp->lcn);
- err = 0;
-
- } else if (sk->sk_state == WANSOCK_CONNECTING &&
- (flags & O_NONBLOCK)) {
- wp->lcn = ((mbox_cmd_t*)wp->mbox)->cmd.lcn;
- DBG_PRINTK(KERN_INFO "\nwansock: Place Call OK: Waiting %i\n",
- wp->lcn);
-
- err = 0;
-
- }else{
- DBG_PRINTK(KERN_INFO "\nwansock: Place call Failed\n");
- err = -ECONNREFUSED;
- }
-
- break;
-
- default:
- return -EINVAL;
- }
-
- return err;
-}
-
-static int check_driver_busy (struct sock *sk)
-{
- struct net_device *dev = dev_get_by_index(sk->sk_bound_dev_if);
- wanpipe_common_t *chan;
-
- if (!dev)
- return 0;
-
- dev_put(dev);
-
- if ((chan=dev->priv) == NULL)
- return 0;
-
- return atomic_read(&chan->driver_busy);
-}
-
-
-/*======================================================================
- * wanpipe_accept
- *
- * ACCEPT() System call. X25API Specific function.
- * For each incoming call, create a new socket and
- * return it to the user.
- *=====================================================================*/
-
-static int wanpipe_accept(struct socket *sock, struct socket *newsock, int flags)
-{
- struct sock *sk;
- struct sock *newsk;
- struct sk_buff *skb;
- DECLARE_WAITQUEUE(wait, current);
- int err=0;
-
- if (newsock->sk != NULL){
- wanpipe_kill_sock_accept(newsock->sk);
- newsock->sk=NULL;
- }
-
- if ((sk = sock->sk) == NULL)
- return -EINVAL;
-
- if (sk->sk_type != SOCK_RAW)
- return -EOPNOTSUPP;
-
- if (sk->sk_state != WANSOCK_LISTEN)
- return -EINVAL;
-
- if (wp_sk(sk)->num != htons(X25_PROT))
- return -EINVAL;
-
- add_wait_queue(sk->sk_sleep,&wait);
- current->state = TASK_INTERRUPTIBLE;
- for (;;){
- skb = skb_dequeue(&sk->sk_receive_queue);
- if (skb){
- err=0;
- break;
- }
- if (signal_pending(current)) {
- err = -ERESTARTSYS;
- break;
- }
- schedule();
- }
- current->state = TASK_RUNNING;
- remove_wait_queue(sk->sk_sleep,&wait);
-
- if (err != 0)
- return err;
-
- newsk = get_newsk_from_skb(skb);
- if (!newsk){
- return -EINVAL;
- }
-
- set_bit(1,&wanpipe_tx_critical);
- write_lock(&wanpipe_sklist_lock);
- sk_add_node(newsk, &wanpipe_sklist);
- write_unlock(&wanpipe_sklist_lock);
- clear_bit(1,&wanpipe_tx_critical);
-
- newsk->sk_socket = newsock;
- newsk->sk_sleep = &newsock->wait;
-
- /* Now attach up the new socket */
- sk->sk_ack_backlog--;
- newsock->sk = newsk;
-
- kfree_skb(skb);
-
- DBG_PRINTK(KERN_INFO "\nwansock: ACCEPT Got LCN %i\n",
- wp_sk(newsk)->lcn);
- return 0;
-}
-
-/*======================================================================
- * get_newsk_from_skb
- *
- * Accept() uses this function to get the address of the new
- * socket structure.
- *=====================================================================*/
-
-struct sock * get_newsk_from_skb (struct sk_buff *skb)
-{
- struct net_device *dev = skb->dev;
- wanpipe_common_t *chan;
-
- if (!dev){
- return NULL;
- }
-
- if ((chan = dev->priv) == NULL){
- return NULL;
- }
-
- if (!chan->sk){
- return NULL;
- }
- return (struct sock *)chan->sk;
-}
-
-/*======================================================================
- * wanpipe_connect
- *
- * CONNECT() System Call. X25API specific function
- * Check the state of the sock, and execute PLACE_CALL command.
- *	Connect can either block or return without waiting for the connection,
- *	as specified by the user.
- *=====================================================================*/
-
-static int wanpipe_connect(struct socket *sock, struct sockaddr *uaddr, int addr_len, int flags)
-{
- struct sock *sk = sock->sk;
- struct wan_sockaddr_ll *addr = (struct wan_sockaddr_ll*)uaddr;
- struct net_device *dev;
- int err;
-
- if (wp_sk(sk)->num != htons(X25_PROT))
- return -EINVAL;
-
- if (sk->sk_state == WANSOCK_CONNECTED)
- return -EISCONN; /* No reconnect on a seqpacket socket */
-
- if (sk->sk_state != WAN_DISCONNECTED) {
- printk(KERN_INFO "wansock: Trying to connect on channel NON DISCONNECT\n");
- return -ECONNREFUSED;
- }
-
- sk->sk_state = WANSOCK_DISCONNECTED;
- sock->state = SS_UNCONNECTED;
-
- if (addr_len != sizeof(struct wan_sockaddr_ll))
- return -EINVAL;
-
- if (addr->sll_family != AF_WANPIPE)
- return -EINVAL;
-
- if ((dev = dev_get_by_index(sk->sk_bound_dev_if)) == NULL)
- return -ENETUNREACH;
-
- dev_put(dev);
-
- if (!sock_flag(sk, SOCK_ZAPPED)) /* Must bind first - autobinding does not work */
- return -EINVAL;
-
- sock->state = SS_CONNECTING;
- sk->sk_state = WANSOCK_CONNECTING;
-
- if (!wp_sk(sk)->mbox) {
- if (wp_sk (sk)->svc)
- return -EINVAL;
- else {
- int err;
- if ((err=set_ioctl_cmd(sk,NULL)) < 0)
- return err;
- }
- }
-
- if ((err=wanpipe_exec_cmd(sk, X25_PLACE_CALL,flags)) != 0){
- sock->state = SS_UNCONNECTED;
- sk->sk_state = WANSOCK_CONNECTED;
- return err;
- }
-
- if (sk->sk_state != WANSOCK_CONNECTED && (flags & O_NONBLOCK)) {
- return 0;
- }
-
- if (sk->sk_state != WANSOCK_CONNECTED) {
- sock->state = SS_UNCONNECTED;
- return -ECONNREFUSED;
- }
-
- sock->state = SS_CONNECTED;
- return 0;
-}
-
-const struct proto_ops wanpipe_ops = {
- .family = PF_WANPIPE,
- .owner = THIS_MODULE,
- .release = wanpipe_release,
- .bind = wanpipe_bind,
- .connect = wanpipe_connect,
- .socketpair = sock_no_socketpair,
- .accept = wanpipe_accept,
- .getname = wanpipe_getname,
- .poll = wanpipe_poll,
- .ioctl = wanpipe_ioctl,
- .listen = wanpipe_listen,
- .shutdown = sock_no_shutdown,
- .setsockopt = sock_no_setsockopt,
- .getsockopt = sock_no_getsockopt,
- .sendmsg = wanpipe_sendmsg,
- .recvmsg = wanpipe_recvmsg
-};
-
-static struct net_proto_family wanpipe_family_ops = {
- .family = PF_WANPIPE,
- .create = wanpipe_create,
- .owner = THIS_MODULE,
-};
-
-struct notifier_block wanpipe_netdev_notifier = {
- .notifier_call = wanpipe_notifier,
-};
-
-
-#ifdef MODULE
-void cleanup_module(void)
-{
- printk(KERN_INFO "wansock: Cleaning up \n");
- unregister_netdevice_notifier(&wanpipe_netdev_notifier);
- sock_unregister(PF_WANPIPE);
- proto_unregister(&wanpipe_proto);
-}
-
-int init_module(void)
-{
- int rc;
-
- printk(KERN_INFO "wansock: Registering Socket \n");
-
- rc = proto_register(&wanpipe_proto, 0);
- if (rc != 0)
- goto out;
-
- sock_register(&wanpipe_family_ops);
- register_netdevice_notifier(&wanpipe_netdev_notifier);
-out:
- return rc;
-}
-#endif
-MODULE_LICENSE("GPL");
-MODULE_ALIAS_NETPROTO(PF_WANPIPE);
short same_lci = 0;
int rc = 0;
- if ((rt = x25_get_route(dest_addr)) != NULL) {
+ if ((rt = x25_get_route(dest_addr)) == NULL)
+ goto out_no_route;
- if ((neigh_new = x25_get_neigh(rt->dev)) == NULL) {
- /* This shouldnt happen, if it occurs somehow
- * do something sensible
- */
- goto out_put_route;
- }
-
- /* Avoid a loop. This is the normal exit path for a
- * system with only one x.25 iface and default route
+ if ((neigh_new = x25_get_neigh(rt->dev)) == NULL) {
+		/* This shouldn't happen; if it occurs somehow,
+		 * do something sensible
*/
- if (rt->dev == from->dev) {
- goto out_put_nb;
- }
+ goto out_put_route;
+ }
- /* Remote end sending a call request on an already
- * established LCI? It shouldnt happen, just in case..
- */
- read_lock_bh(&x25_forward_list_lock);
- list_for_each(entry, &x25_forward_list) {
- x25_frwd = list_entry(entry, struct x25_forward, node);
- if (x25_frwd->lci == lci) {
- printk(KERN_WARNING "X.25: call request for lci which is already registered!, transmitting but not registering new pair\n");
- same_lci = 1;
- }
- }
- read_unlock_bh(&x25_forward_list_lock);
-
- /* Save the forwarding details for future traffic */
- if (!same_lci){
- if ((new_frwd = kmalloc(sizeof(struct x25_forward),
- GFP_ATOMIC)) == NULL){
- rc = -ENOMEM;
- goto out_put_nb;
- }
- new_frwd->lci = lci;
- new_frwd->dev1 = rt->dev;
- new_frwd->dev2 = from->dev;
- write_lock_bh(&x25_forward_list_lock);
- list_add(&new_frwd->node, &x25_forward_list);
- write_unlock_bh(&x25_forward_list_lock);
+ /* Avoid a loop. This is the normal exit path for a
+ * system with only one x.25 iface and default route
+ */
+ if (rt->dev == from->dev) {
+ goto out_put_nb;
+ }
+
+ /* Remote end sending a call request on an already
+	 * established LCI? It shouldn't happen, but just in case..
+ */
+ read_lock_bh(&x25_forward_list_lock);
+ list_for_each(entry, &x25_forward_list) {
+ x25_frwd = list_entry(entry, struct x25_forward, node);
+ if (x25_frwd->lci == lci) {
+			printk(KERN_WARNING "X.25: call request for lci which is already registered; transmitting but not registering new pair\n");
+ same_lci = 1;
}
+ }
+ read_unlock_bh(&x25_forward_list_lock);
- /* Forward the call request */
- if ( (skbn = skb_clone(skb, GFP_ATOMIC)) == NULL){
+ /* Save the forwarding details for future traffic */
+ if (!same_lci){
+ if ((new_frwd = kmalloc(sizeof(struct x25_forward),
+ GFP_ATOMIC)) == NULL){
+ rc = -ENOMEM;
goto out_put_nb;
}
- x25_transmit_link(skbn, neigh_new);
- rc = 1;
+ new_frwd->lci = lci;
+ new_frwd->dev1 = rt->dev;
+ new_frwd->dev2 = from->dev;
+ write_lock_bh(&x25_forward_list_lock);
+ list_add(&new_frwd->node, &x25_forward_list);
+ write_unlock_bh(&x25_forward_list_lock);
}
+ /* Forward the call request */
+ if ( (skbn = skb_clone(skb, GFP_ATOMIC)) == NULL){
+ goto out_put_nb;
+ }
+ x25_transmit_link(skbn, neigh_new);
+ rc = 1;
+
out_put_nb:
x25_neigh_put(neigh_new);
out_put_route:
x25_route_put(rt);
+
+out_no_route:
return rc;
}
sizeof(struct in6_addr));
}
audit_log_format(audit_buf,
- " src=" NIP6_FMT "dst=" NIP6_FMT,
+ " src=" NIP6_FMT " dst=" NIP6_FMT,
NIP6(saddr6), NIP6(daddr6));
}
break;
x->props.mode != mode ||
x->props.family != family ||
x->km.state != XFRM_STATE_ACQ ||
- x->id.spi != 0)
+ x->id.spi != 0 ||
+ x->id.proto != proto)
continue;
switch (family) {
if (use_spi && x->km.seq) {
x1 = __xfrm_find_acq_byseq(x->km.seq);
- if (x1 && xfrm_addr_cmp(&x1->id.daddr, &x->id.daddr, family)) {
+ if (x1 && ((x1->id.proto != x->id.proto) ||
+ xfrm_addr_cmp(&x1->id.daddr, &x->id.daddr, family))) {
xfrm_state_put(x1);
x1 = NULL;
}
return 0;
diff = x->replay.seq - seq;
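+	/* never honor a replay window wider than the replay bitmap itself */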
- if (diff >= x->props.replay_window) {
+ if (diff >= min_t(unsigned int, x->props.replay_window,
+ sizeof(x->replay.bitmap) * 8)) {
x->stats.replay_window++;
return -EINVAL;
}
}
-static inline int xfrm_user_sec_ctx_size(struct xfrm_policy *xp)
+static inline int xfrm_user_sec_ctx_size(struct xfrm_sec_ctx *xfrm_ctx)
{
- struct xfrm_sec_ctx *xfrm_ctx = xp->security;
int len = 0;
if (xfrm_ctx) {
return -1;
}
-static int inline xfrm_sa_len(struct xfrm_state *x)
+static inline int xfrm_sa_len(struct xfrm_state *x)
{
int l = 0;
if (x->aalg)
len = RTA_SPACE(sizeof(struct xfrm_user_tmpl) * xp->xfrm_nr);
len += NLMSG_SPACE(sizeof(struct xfrm_user_acquire));
- len += RTA_SPACE(xfrm_user_sec_ctx_size(xp));
+ len += RTA_SPACE(xfrm_user_sec_ctx_size(x->security));
#ifdef CONFIG_XFRM_SUB_POLICY
len += RTA_SPACE(sizeof(struct xfrm_userpolicy_type));
#endif
len = RTA_SPACE(sizeof(struct xfrm_user_tmpl) * xp->xfrm_nr);
len += NLMSG_SPACE(sizeof(struct xfrm_user_polexpire));
- len += RTA_SPACE(xfrm_user_sec_ctx_size(xp));
+ len += RTA_SPACE(xfrm_user_sec_ctx_size(xp->security));
#ifdef CONFIG_XFRM_SUB_POLICY
len += RTA_SPACE(sizeof(struct xfrm_userpolicy_type));
#endif
* the dependency on linux/autoconf.h by a dependency on every config
 * option which is mentioned in any of the listed prerequisites.
*
- * To be exact, split-include populates a tree in include/config/,
- * e.g. include/config/his/driver.h, which contains the #define/#undef
- * for the CONFIG_HIS_DRIVER option.
+ * kconfig populates a tree in include/config/ with an empty file
+ * for each config symbol. When the configuration is updated, the
+ * files representing changed config options are touched, which lets
+ * make pick up the changes and rebuild the files that use those
+ * config symbols.
*
* So if the user changes his CONFIG_HIS_DRIVER option, only the objects
* which depend on "include/linux/config/his/driver.h" will be rebuilt,
continue;
found:
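+	/* strip a trailing "_MODULE" so CONFIG_FOO_MODULE is treated like CONFIG_FOO */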
+ if (!memcmp(q - 7, "_MODULE", 7))
+ q -= 7;
use_config(p+7, q-p-7);
}
}
return;
}
-static int dummy_getprocattr(struct task_struct *p, char *name, void *value, size_t size)
+static int dummy_getprocattr(struct task_struct *p, char *name, char **value)
{
return -EINVAL;
}
}
static int selinux_getprocattr(struct task_struct *p,
- char *name, void *value, size_t size)
+ char *name, char **value)
{
struct task_security_struct *tsec;
u32 sid;
int error;
+ unsigned len;
if (current != p) {
error = task_has_perm(current, p, PROCESS__GETATTR);
if (!sid)
return 0;
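+	/* hand back a newly allocated context string in *value and return its length */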
- return selinux_getsecurity(sid, value, size);
+ error = security_sid_to_context(sid, value, &len);
+ if (error)
+ return error;
+ return len;
}
static int selinux_setprocattr(struct task_struct *p,
.ioctl = sq_ioctl,
.open = sq_open,
.release = sq_release,
+};
+
#ifdef HAS_RECORD
- .read = NULL /* default to no read for compat mode */
-#endif
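+/* same as sq_fops but with a .read handler; selected at init time when the machine can record */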
+static const struct file_operations sq_fops_record =
+{
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .write = sq_write,
+ .poll = sq_poll,
+ .ioctl = sq_ioctl,
+ .open = sq_open,
+ .release = sq_release,
+ .read = sq_read,
};
+#endif
static int sq_init(void)
{
+ const struct file_operations *fops = &sq_fops;
#ifndef MODULE
int sq_unit;
#endif
#ifdef HAS_RECORD
if (dmasound.mach.record)
- sq_fops.read = sq_read ;
+ fops = &sq_fops_record;
#endif
- sq_unit = register_sound_dsp(&sq_fops, -1);
+ sq_unit = register_sound_dsp(fops, -1);
if (sq_unit < 0) {
printk(KERN_ERR "dmasound_core: couldn't register fops\n") ;
return sq_unit ;
static void ad1888_update_jacks(struct snd_ac97 *ac97)
{
unsigned short val = 0;
- if (! is_shared_linein(ac97))
+ /* clear LODIS if shared jack is to be used for Surround out */
+ if (is_shared_linein(ac97))
val |= (1 << 12);
- if (! is_shared_micin(ac97))
+ /* clear CLDIS if shared jack is to be used for C/LFE out */
+ if (is_shared_micin(ac97))
val |= (1 << 11);
/* shared Line-In */
snd_ac97_update_bits(ac97, AC97_AD_MISC, (1 << 11) | (1 << 12), val);
static void ad1985_update_jacks(struct snd_ac97 *ac97)
{
ad1888_update_jacks(ac97);
+ /* clear OMS if shared jack is to be used for C/LFE out */
snd_ac97_update_bits(ac97, AC97_AD_SERIAL_CFG, 1 << 9,
- is_shared_micin(ac97) ? 0 : 1 << 9);
+ is_shared_micin(ac97) ? 1 << 9 : 0);
}
static int patch_ad1985_specific(struct snd_ac97 *ac97)
unsigned short ser_val;
/* disable SURROUND and CENTER/LFE if not surround mode */
- if (! is_surround_on(ac97))
+ if (!is_surround_on(ac97))
misc_val |= AC97_AD1986_SODIS;
- if (! is_clfe_on(ac97))
+ if (!is_clfe_on(ac97))
misc_val |= AC97_AD1986_CLDIS;
/* select line input (default=LINE_IN, SURROUND or MIC_1/2) */
/* STATESTS int mask: SD2,SD1,SD0 */
#define STATESTS_INT_MASK 0x07
-#define AZX_MAX_CODECS 3
/* SD_CTL bits */
#define SD_CTL_STREAM_RESET 0x01 /* stream reset bit */
* Codec initialization
*/
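+/* maximum number of codec slots to probe, indexed by the azx driver type */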
+static unsigned int azx_max_codecs[] __devinitdata = {
+ [AZX_DRIVER_ICH] = 3,
+ [AZX_DRIVER_ATI] = 4,
+ [AZX_DRIVER_ATIHDMI] = 4,
+ [AZX_DRIVER_VIA] = 3, /* FIXME: correct? */
+ [AZX_DRIVER_SIS] = 3, /* FIXME: correct? */
+ [AZX_DRIVER_ULI] = 3, /* FIXME: correct? */
+ [AZX_DRIVER_NVIDIA] = 3, /* FIXME: correct? */
+};
+
static int __devinit azx_codec_create(struct azx *chip, const char *model)
{
struct hda_bus_template bus_temp;
return err;
codecs = 0;
- for (c = 0; c < AZX_MAX_CODECS; c++) {
+ for (c = 0; c < azx_max_codecs[chip->driver_type]; c++) {
if ((chip->codec_mask & (1 << c)) & probe_mask) {
err = snd_hda_codec_new(chip->bus, c, NULL);
if (err < 0)
runtime->hw.rates = hinfo->rates;
snd_pcm_limit_hw_rates(runtime);
snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS);
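+	/* keep buffer and period sizes multiples of 128 bytes */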
+ snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_BUFFER_BYTES,
+ 128);
+ snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_BYTES,
+ 128);
if ((err = hinfo->ops.open(hinfo, apcm->codec, substream)) < 0) {
azx_release_device(azx_dev);
mutex_unlock(&chip->open_mutex);
SND_PCI_QUIRK(0x1043, 0x81b3, "ASUS P5", AD1986A_3STACK),
SND_PCI_QUIRK(0x1043, 0x81cb, "ASUS M2N", AD1986A_3STACK),
SND_PCI_QUIRK(0x1043, 0x8234, "ASUS M2N", AD1986A_3STACK),
+ SND_PCI_QUIRK(0x144d, 0xb03c, "Samsung R55", AD1986A_3STACK),
SND_PCI_QUIRK(0x144d, 0xc01e, "FSC V2060", AD1986A_LAPTOP),
SND_PCI_QUIRK(0x144d, 0xc023, "Samsung X60", AD1986A_LAPTOP_EAPD),
SND_PCI_QUIRK(0x144d, 0xc024, "Samsung R65", AD1986A_LAPTOP_EAPD),
SND_PCI_QUIRK(0x144d, 0xc026, "Samsung X11", AD1986A_LAPTOP_EAPD),
SND_PCI_QUIRK(0x144d, 0xc504, "Samsung Q35", AD1986A_3STACK),
SND_PCI_QUIRK(0x144d, 0xc027, "Samsung Q1", AD1986A_ULTRA),
+ SND_PCI_QUIRK(0x17aa, 0x1011, "Lenovo M55", AD1986A_LAPTOP),
SND_PCI_QUIRK(0x17aa, 0x1017, "Lenovo A60", AD1986A_3STACK),
SND_PCI_QUIRK(0x17aa, 0x2066, "Lenovo N100", AD1986A_LAPTOP_EAPD),
SND_PCI_QUIRK(0x17c0, 0x2017, "Samsung M50", AD1986A_LAPTOP),
/*
* Patch for HP nx6320
*
- * nx6320 uses EAPD in the reserve way - EAPD-on means the internal
+ * nx6320 uses EAPD in the reverse way - EAPD-on means the internal
* speaker output enabled _and_ mute-LED off.
*/
return 0;
}
+/* configuration for Toshiba Laptops */
+static struct hda_verb ad1981_toshiba_init_verbs[] = {
+ {0x05, AC_VERB_SET_EAPD_BTLENABLE, 0x01 }, /* default on */
+ /* pin sensing on HP and Mic jacks */
+ {0x06, AC_VERB_SET_UNSOLICITED_ENABLE, AC_USRSP_EN | AD1981_HP_EVENT},
+ {0x08, AC_VERB_SET_UNSOLICITED_ENABLE, AC_USRSP_EN | AD1981_MIC_EVENT},
+ {}
+};
+
+static struct snd_kcontrol_new ad1981_toshiba_mixers[] = {
+ HDA_CODEC_VOLUME("Amp Volume", 0x1a, 0x0, HDA_OUTPUT),
+ HDA_CODEC_MUTE("Amp Switch", 0x1a, 0x0, HDA_OUTPUT),
+ { }
+};
+
/* configuration for Lenovo Thinkpad T60 */
static struct snd_kcontrol_new ad1981_thinkpad_mixers[] = {
HDA_CODEC_VOLUME("Master Playback Volume", 0x05, 0x0, HDA_OUTPUT),
AD1981_BASIC,
AD1981_HP,
AD1981_THINKPAD,
+ AD1981_TOSHIBA,
AD1981_MODELS
};
[AD1981_HP] = "hp",
[AD1981_THINKPAD] = "thinkpad",
[AD1981_BASIC] = "basic",
+ [AD1981_TOSHIBA] = "toshiba"
};
static struct snd_pci_quirk ad1981_cfg_tbl[] = {
/* Lenovo Thinkpad T60/X60/Z6xx */
SND_PCI_QUIRK(0x17aa, 0, "Lenovo Thinkpad", AD1981_THINKPAD),
SND_PCI_QUIRK(0x1014, 0x0597, "Lenovo Z60", AD1981_THINKPAD),
+ SND_PCI_QUIRK(0x1179, 0x0001, "Toshiba U205", AD1981_TOSHIBA),
{}
};
spec->mixers[0] = ad1981_thinkpad_mixers;
spec->input_mux = &ad1981_thinkpad_capture_source;
break;
+ case AD1981_TOSHIBA:
+ spec->mixers[0] = ad1981_hp_mixers;
+ spec->mixers[1] = ad1981_toshiba_mixers;
+ spec->num_init_verbs = 2;
+ spec->init_verbs[1] = ad1981_toshiba_init_verbs;
+ spec->multiout.dig_out_nid = 0;
+ spec->input_mux = &ad1981_hp_capture_source;
+ codec->patch_ops.init = ad1981_hp_init;
+ codec->patch_ops.unsol_event = ad1981_hp_unsol_event;
+ break;
}
-
return 0;
}
[AD1988_AUTO] = "auto",
};
+static struct snd_pci_quirk ad1988_cfg_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x81f6, "Asus M2N-SLI", AD1988_6STACK_DIG),
+ SND_PCI_QUIRK(0x1043, 0x81ec, "Asus P5B-DLX", AD1988_6STACK_DIG),
+ {}
+};
+
static int patch_ad1988(struct hda_codec *codec)
{
struct ad198x_spec *spec;
snd_printk(KERN_INFO "patch_analog: AD1988A rev.2 is detected, enable workarounds\n");
board_config = snd_hda_check_board_config(codec, AD1988_MODEL_LAST,
- ad1988_models, NULL);
+ ad1988_models, ad1988_cfg_tbl);
if (board_config < 0) {
printk(KERN_INFO "hda_codec: Unknown model for AD1988, trying auto-probe from BIOS...\n");
board_config = AD1988_AUTO;
static struct snd_pci_quirk alc260_cfg_tbl[] = {
SND_PCI_QUIRK(0x1025, 0x007b, "Acer C20x", ALC260_ACER),
SND_PCI_QUIRK(0x1025, 0x008f, "Acer", ALC260_ACER),
+ SND_PCI_QUIRK(0x103c, 0x2808, "HP d5700", ALC260_HP_3013),
+ SND_PCI_QUIRK(0x103c, 0x280a, "HP d5750", ALC260_HP_3013),
SND_PCI_QUIRK(0x103c, 0x3010, "HP", ALC260_HP_3013),
SND_PCI_QUIRK(0x103c, 0x3011, "HP", ALC260_HP),
SND_PCI_QUIRK(0x103c, 0x3012, "HP", ALC260_HP_3013),
STAC_D945GTP5,
STAC_MACMINI,
STAC_MACBOOK,
- STAC_MACBOOK_PRO,
+ STAC_MACBOOK_PRO_V1,
+ STAC_MACBOOK_PRO_V2,
STAC_922X_MODELS
};
0x400000fc, 0x400000fb,
};
-static unsigned int macbook_pro_pin_configs[10] = {
+static unsigned int macbook_pro_v1_pin_configs[10] = {
+ 0x0321e230, 0x03a1e020, 0x9017e110, 0x01014010,
+ 0x01a19021, 0x0381e021, 0x1345e240, 0x13c5e22e,
+ 0x02a19320, 0x400000fb
+};
+
+static unsigned int macbook_pro_v2_pin_configs[10] = {
0x0221401f, 0x90a70120, 0x01813024, 0x01014010,
0x400000fd, 0x01016011, 0x1345e240, 0x13c5e22e,
0x400000fc, 0x400000fb,
[STAC_D945GTP5] = d945gtp5_pin_configs,
[STAC_MACMINI] = d945gtp5_pin_configs,
[STAC_MACBOOK] = macbook_pin_configs,
- [STAC_MACBOOK_PRO] = macbook_pro_pin_configs,
+ [STAC_MACBOOK_PRO_V1] = macbook_pro_v1_pin_configs,
+ [STAC_MACBOOK_PRO_V2] = macbook_pro_v2_pin_configs,
};
static const char *stac922x_models[STAC_922X_MODELS] = {
[STAC_D945GTP3] = "3stack",
[STAC_MACMINI] = "macmini",
[STAC_MACBOOK] = "macbook",
- [STAC_MACBOOK_PRO] = "macbook-pro",
+ [STAC_MACBOOK_PRO_V1] = "macbook-pro-v1",
+ [STAC_MACBOOK_PRO_V2] = "macbook-pro",
};
static struct snd_pci_quirk stac922x_cfg_tbl[] = {
for (i = 0; i < cfg->hp_outs; i++)
enable_pin_detect(codec, cfg->hp_pins[i],
STAC_HP_EVENT);
+	/* force-enable the first line-out; the others are set up
+ * in unsol_event
+ */
+ stac92xx_auto_set_pinctl(codec, spec->autocfg.line_out_pins[0],
+ AC_PINCTL_OUT_EN);
stac92xx_auto_init_hp_out(codec);
/* fake event to set up pins */
codec->patch_ops.unsol_event(codec, STAC_HP_EVENT << 26);
/* Intel Macs have all same PCI SSID, so we need to check
* codec SSID to distinguish the exact models
*/
+ printk(KERN_INFO "hda_codec: STAC922x, Apple subsys_id=%x\n", codec->subsystem_id);
switch (codec->subsystem_id) {
- case 0x106b1e00:
- spec->board_config = STAC_MACBOOK_PRO;
+ case 0x106b0200: /* MacBook Pro first generation */
+ spec->board_config = STAC_MACBOOK_PRO_V1;
+ break;
+ case 0x106b1e00: /* MacBook Pro second generation */
+ spec->board_config = STAC_MACBOOK_PRO_V2;
break;
}
}
static inline void snd_intel8x0_update(struct intel8x0 *chip, struct ichdev *ichdev)
{
unsigned long port = ichdev->reg_offset;
+ unsigned long flags;
int status, civ, i, step;
int ack = 0;
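+	/* take the register lock with local interrupts disabled, restoring the caller's state afterwards */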
- spin_lock(&chip->reg_lock);
+ spin_lock_irqsave(&chip->reg_lock, flags);
status = igetbyte(chip, port + ichdev->roff_sr);
civ = igetbyte(chip, port + ICH_REG_OFF_CIV);
if (!(status & ICH_BCIS)) {
ack = 1;
}
}
- spin_unlock(&chip->reg_lock);
+ spin_unlock_irqrestore(&chip->reg_lock, flags);
if (ack && ichdev->substream) {
snd_pcm_period_elapsed(ichdev->substream);
}
}
pci_disable_device(pci);
pci_save_state(pci);
- pci_set_power_state(pci, pci_choose_state(pci, state));
+	/* The call below may disable the built-in speaker on some laptops
+ * after S2RAM. So, don't touch it.
+ */
+ /* pci_set_power_state(pci, pci_choose_state(pci, state)); */
return 0;
}
config SND_SOC
tristate "SoC audio support"
+ depends on SND
+ select SND_PCM
---help---
If you want SoC support, you should say Y here and also to the
config SND_AT91_SOC
tristate "SoC Audio for the Atmel AT91 System-on-Chip"
- depends on ARCH_AT91 && SND
- select SND_PCM
+ depends on ARCH_AT91 && SND_SOC
help
Say Y or M if you want to add support for codecs attached to
the AT91 SSC interface. You will also need
config SND_PXA2XX_SOC
tristate "SoC Audio for the Intel PXA2xx chip"
- depends on ARCH_PXA && SND
- select SND_PCM
+ depends on ARCH_PXA && SND_SOC
help
Say Y or M if you want to add support for codecs attached to
the PXA2xx AC97, I2S or SSP interface. You will also need