From: Larry Woodman <lwoodman@redhat.com>
Date: Wed, 22 Sep 2010 19:47:35 -0400
Subject: [misc] amd_iommu: fix slab corruption with iommu enabled
Message-id: <1285184855.31554.68.camel@dhcp-100-19-198.bos.redhat.com>
Patchwork-id: 28342
O-Subject: [RHEL5 Patch] Fix slab corruption on amd boxes with iommu enabled
Bugzilla: 530619
RH-Acked-by: David S. Miller <davem@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Andy Gospodarek <gospo@redhat.com>
RH-Acked-by: Chris Wright <chrisw@redhat.com>

Bugzilla #530619

When running the RHEL5 debug kernel on AMD boxes with the iommu enabled, we
were seeing slab corruption because __unmap_single() in
arch/x86_64/kernel/amd_iommu.c called iommu_flush_pages() with a dma_addr
that had already been logically AND'ed with PAGE_MASK.

This is a backport of upstream commit
899483e4b210023ed9d0b78296d90b2640565c40:

From: Joerg Roedel <joerg.roedel@amd.com>
Date: Mon, 20 Sep 2010 11:06:23 +0200
Subject: [PATCH] amd-iommu: Fix wrong io/tlb flush address in __unmap_single

In the __unmap_single function the dma_addr is rounded down to the next
lower page boundary for unmapping all requested pages. The problem is that
this operation loses the offset into the page, which lets the
cache-flushing code assume it has to flush one page instead of two. This
can lead to stale io/tlb entries and all kinds of unpredictable behavior.
In practice this bug resulted in slab corruption and io-page-faults.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>

diff --git a/arch/x86_64/kernel/amd_iommu.c b/arch/x86_64/kernel/amd_iommu.c
index 59da1f0..bd2b5c7 100644
--- a/arch/x86_64/kernel/amd_iommu.c
+++ b/arch/x86_64/kernel/amd_iommu.c
@@ -1327,6 +1327,7 @@ static void __unmap_single(struct amd_iommu *iommu,
 {
 	dma_addr_t i, start;
 	unsigned int pages;
+	dma_addr_t flush_addr = dma_addr;
 
 	if ((dma_addr == bad_dma_address) ||
 	    (dma_addr + size > dma_dom->aperture_size))
@@ -1346,7 +1347,7 @@ static void __unmap_single(struct amd_iommu *iommu,
 	dma_ops_free_addresses(dma_dom, dma_addr, pages);
 
 	if (amd_iommu_unmap_flush || dma_dom->need_flush) {
-		iommu_flush_pages(iommu, dma_dom->domain.id, dma_addr, size);
+		iommu_flush_pages(iommu, dma_dom->domain.id, flush_addr, size);
 		dma_dom->need_flush = false;
 	}
 }