Is there a way to fix this? I'd like to have a white background where the transparent background used to be.
Solution
Thanks to the great answers, I’ve come up with the following function collection:
import Image
import numpy as np
def alpha_to_color(image, color=(255, 255, 255)):
"""Set all fully transparent pixels of an RGBA image to the specified color.
    This is a very simple solution that might leave behind some ugly edges, due
    to semi-transparent areas. You should use alpha_composite_with_color instead.
Source: http://stackoverflow.com/a/9166671/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
x = np.array(image)
r, g, b, a = np.rollaxis(x, axis=-1)
r[a == 0] = color[0]
g[a == 0] = color[1]
b[a == 0] = color[2]
x = np.dstack([r, g, b, a])
return Image.fromarray(x, 'RGBA')
def alpha_composite(front, back):
"""Alpha composite two RGBA images.
Source: http://stackoverflow.com/a/9166671/284318
Keyword Arguments:
front -- PIL RGBA Image object
back -- PIL RGBA Image object
"""
front = np.asarray(front)
back = np.asarray(back)
result = np.empty(front.shape, dtype='float')
alpha = np.index_exp[:, :, 3:]
rgb = np.index_exp[:, :, :3]
falpha = front[alpha] / 255.0
balpha = back[alpha] / 255.0
result[alpha] = falpha + balpha * (1 - falpha)
old_setting = np.seterr(invalid='ignore')
result[rgb] = (front[rgb] * falpha + back[rgb] * balpha * (1 - falpha)) / result[alpha]
np.seterr(**old_setting)
result[alpha] *= 255
    np.clip(result, 0, 255, out=result)
# astype('uint8') maps np.nan and np.inf to 0
result = result.astype('uint8')
result = Image.fromarray(result, 'RGBA')
return result
def alpha_composite_with_color(image, color=(255, 255, 255)):
"""Alpha composite an RGBA image with a single color image of the
specified color and the same size as the original image.
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
back = Image.new('RGBA', size=image.size, color=color + (255,))
return alpha_composite(image, back)
def pure_pil_alpha_to_color_v1(image, color=(255, 255, 255)):
"""Alpha composite an RGBA Image with a specified color.
NOTE: This version is much slower than the
alpha_composite_with_color solution. Use it only if
numpy is not available.
Source: http://stackoverflow.com/a/9168169/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
def blend_value(back, front, a):
return (front * a + back * (255 - a)) / 255
def blend_rgba(back, front):
result = [blend_value(back[i], front[i], front[3]) for i in (0, 1, 2)]
return tuple(result + [255])
im = image.copy() # don't edit the reference directly
p = im.load() # load pixel array
for y in range(im.size[1]):
for x in range(im.size[0]):
p[x, y] = blend_rgba(color + (255,), p[x, y])
return im
def pure_pil_alpha_to_color_v2(image, color=(255, 255, 255)):
"""Alpha composite an RGBA Image with a specified color.
Simpler, faster version than the solutions above.
Source: http://stackoverflow.com/a/9459208/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
image.load() # needed for split()
background = Image.new('RGB', image.size, color)
background.paste(image, mask=image.split()[3]) # 3 is the alpha channel
return background
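For illustration, a quick usage sketch of the collection above (the file names are placeholders; it assumes the imports and functions defined above):

img = Image.open('logo.png').convert('RGBA')           # make sure the image has an alpha channel
flattened = pure_pil_alpha_to_color_v2(img, color=(255, 255, 255))
flattened.save('logo_white.jpg', 'JPEG', quality=80)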
Performance
The simple non-compositing alpha_to_color function is the fastest solution, but it leaves behind ugly borders because it does not handle semi-transparent areas.
Both the pure PIL and the numpy compositing solutions give great results, but alpha_composite_with_color is much faster (8.93 msec) than pure_pil_alpha_to_color (79.6 msec). If numpy is available on your system, that’s the way to go. (Update: The new pure PIL version is the fastest of all mentioned solutions.)
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.alpha_to_color(i)"
10 loops, best of 3: 4.67 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.alpha_composite_with_color(i)"
10 loops, best of 3: 8.93 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.pure_pil_alpha_to_color(i)"
10 loops, best of 3: 79.6 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.pure_pil_alpha_to_color_v2(i)"
10 loops, best of 3: 1.1 msec per loop
Here's a version that's much simpler – not sure how performant it is. It's heavily based on a Django snippet I found while building RGBA -> JPG + BG support for sorl thumbnails.
from PIL import Image
png = Image.open(object.logo.path)
png.load() # required for png.split()
background = Image.new("RGB", png.size, (255, 255, 255))
background.paste(png, mask=png.split()[3]) # 3 is the alpha channel
background.save('foo.jpg', 'JPEG', quality=80)
Result @ 80% and @ 50% JPEG quality.
Answer 1
By using Image.alpha_composite, the solution by Yuji 'Tomita' Tomita becomes simpler. This code can also avoid a "tuple index out of range" error if the PNG has no alpha channel.
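A minimal sketch of that approach (the file names are placeholders; converting the PNG to RGBA first is what avoids the missing-alpha-channel error, and converting back to RGB before saving is needed because JPEG cannot store an alpha channel):

from PIL import Image

png = Image.open('logo.png').convert('RGBA')          # convert() adds an alpha channel if the PNG has none
background = Image.new('RGBA', png.size, (255, 255, 255, 255))
composited = Image.alpha_composite(background, png)   # both arguments must be RGBA images of the same size
composited.convert('RGB').save('logo_white.jpg', 'JPEG', quality=80)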
Note that the logo also has some semi-transparent pixels used to smooth the edges around the words and icon. Saving to JPEG ignores the semi-transparency, making the resulting JPEG look quite jagged.
A better quality result could be made using imagemagick’s convert command:
convert logo.png -background white -flatten /tmp/out.jpg
To make a nicer quality blend using numpy, you could use alpha compositing:
import Image
import numpy as np
def alpha_composite(src, dst):
'''
Return the alpha composite of src and dst.
Parameters:
src -- PIL RGBA Image object
dst -- PIL RGBA Image object
The algorithm comes from http://en.wikipedia.org/wiki/Alpha_compositing
'''
# http://stackoverflow.com/a/3375291/190597
# http://stackoverflow.com/a/9166671/190597
src = np.asarray(src)
dst = np.asarray(dst)
out = np.empty(src.shape, dtype = 'float')
alpha = np.index_exp[:, :, 3:]
rgb = np.index_exp[:, :, :3]
src_a = src[alpha]/255.0
dst_a = dst[alpha]/255.0
out[alpha] = src_a+dst_a*(1-src_a)
old_setting = np.seterr(invalid = 'ignore')
out[rgb] = (src[rgb]*src_a + dst[rgb]*dst_a*(1-src_a))/out[alpha]
np.seterr(**old_setting)
out[alpha] *= 255
    np.clip(out, 0, 255, out=out)
# astype('uint8') maps np.nan (and np.inf) to 0
out = out.astype('uint8')
out = Image.fromarray(out, 'RGBA')
return out
FNAME = 'logo.png'
img = Image.open(FNAME).convert('RGBA')
white = Image.new('RGBA', size = img.size, color = (255, 255, 255, 255))
img = alpha_composite(img, white)
img.convert('RGB').save('/tmp/out.jpg')  # JPEG cannot store alpha, so drop it before saving
Answer 3
Here is a pure PIL solution.
def blend_value(under, over, a):
return (over*a + under*(255-a)) / 255
def blend_rgba(under, over):
return tuple([blend_value(under[i], over[i], over[3]) for i in (0,1,2)] + [255])
white = (255, 255, 255, 255)
im = Image.open(object.logo.path)
p = im.load()
for y in range(im.size[1]):
for x in range(im.size[0]):
p[x,y] = blend_rgba(white, p[x,y])
im.save('/tmp/output.png')
It’s not broken. It’s doing exactly what you told it to; those pixels are black with full transparency. You will need to iterate across all pixels and convert ones with full transparency to white.
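A minimal sketch of that pixel-by-pixel idea (file names are placeholders; like alpha_to_color above, it only touches fully transparent pixels, so semi-transparent edges are left as they are):

from PIL import Image

img = Image.open('logo.png').convert('RGBA')
pixels = img.load()
for y in range(img.size[1]):
    for x in range(img.size[0]):
        r, g, b, a = pixels[x, y]
        if a == 0:                               # fully transparent pixel
            pixels[x, y] = (255, 255, 255, 255)  # replace with opaque white
img.convert('RGB').save('logo_white.jpg', 'JPEG')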
Answer 5
import numpy as np
from PIL import Image

def convert_image(image_file):
    image = Image.open(image_file)  # this could be a 4-channel (RGBA) PNG
    original_width, original_height = image.size
    np_image = np.array(image)
    new_image = np.zeros((np_image.shape[0], np_image.shape[1], 3))  # create 3D array
    for each_channel in range(3):
        new_image[:, :, each_channel] = np_image[:, :, each_channel]  # only copy first 3 channels; the alpha channel is simply dropped
    # flushing
    np_image = []
    return new_image
import numpy as np
from PIL import Image

def fig2img(fig):
    """
    @brief Convert a Matplotlib figure to a PIL Image in RGBA format and return it
    @param fig a matplotlib figure
    @return a Python Imaging Library (PIL) image
    """
    # put the figure pixmap into a numpy array
    buf = fig2data(fig)
    w, h, d = buf.shape
    return Image.frombytes("RGBA", (w, h), buf.tostring())

def fig2data(fig):
    """
    @brief Convert a Matplotlib figure to a 4D numpy array with RGBA channels and return it
    @param fig a matplotlib figure
    @return a numpy 3D array of RGBA values
    """
    # draw the renderer
    fig.canvas.draw()
    # Get the RGBA buffer from the figure
    w, h = fig.canvas.get_width_height()
    buf = np.fromstring(fig.canvas.tostring_argb(), dtype=np.uint8)
    buf.shape = (w, h, 4)
    # canvas.tostring_argb gives the pixmap in ARGB mode; roll the ALPHA channel to have it in RGBA mode
    buf = np.roll(buf, 3, axis=2)
    return buf
def rgba2rgb(img, c=(0, 0, 0), path='foo.jpg', is_already_saved=False, if_load=True):
    if not is_already_saved:
        background = Image.new("RGB", img.size, c)
        background.paste(img, mask=img.split()[3])  # 3 is the alpha channel
        background.save(path, 'JPEG', quality=100)
        is_already_saved = True
    if if_load:
        if is_already_saved:
            im = Image.open(path)
            return np.array(im)
        else:
            raise ValueError('No image to load.')
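For example, to flatten a transparent PNG onto white and get a numpy array back (a sketch with placeholder file paths, assuming the imports and the rgba2rgb definition above):

logo = Image.open('logo.png').convert('RGBA')    # make sure there is an alpha channel to split
arr = rgba2rgb(logo, c=(255, 255, 255), path='/tmp/logo_white.jpg')
print(arr.shape)                                 # (height, width, 3)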
libjpeg-dev is required for Pillow (or PIL) to be able to process JPEGs, so you need to install it and then recompile Pillow. It also seems that libjpeg8-dev is needed on Ubuntu 14.04.
(If you're still using PIL, you should really be switching to Pillow these days; so first run "pip uninstall PIL" before following these instructions to switch, or, if you have a good reason for sticking with PIL, replace "pillow" with "PIL" below.)
On Ubuntu:
# install libjpeg-dev with apt
sudo apt-get install libjpeg-dev
# if you're on Ubuntu 14.04, also install this
sudo apt-get install libjpeg8-dev
# reinstall pillow
pip install --no-cache-dir -I pillow
If that doesn't work, try one of the commands below, depending on whether you are on 64-bit or 32-bit Ubuntu.
(Edited to include feedback from the comments. Thanks to Charles Offenbacher for pointing out that this differs for 32-bit, and to t-mart for suggesting the use of --no-cache-dir.)
When you see "--- JPEG support available", that means it works.
But if it still doesn't work when you edit your JPEG image, check the Python path!
My Python path was missing /usr/local/lib/python2.7/dist-packages/PIL-1.1.7-py2.7-linux-x86_64.egg/, so I edited ~/.bashrc to add that directory to the path.
Rolo's answer is excellent; however, I had to reinstall Pillow bypassing the pip cache (introduced with pip 7), otherwise it wouldn't get properly recompiled.
The command is:
pip install -I --no-cache-dir -v Pillow
You can check whether Pillow has been properly configured by looking for this in the build log:
PIL SETUP SUMMARY
--------------------------------------------------------------------
version Pillow 2.8.2
platform linux 3.4.3 (default, May 25 2015, 15:44:26)
[GCC 4.8.2]
--------------------------------------------------------------------
*** TKINTER support not available
--- JPEG support available
*** OPENJPEG (JPEG2000) support not available
--- ZLIB (PNG/ZIP) support available
--- LIBTIFF support available
--- FREETYPE2 support available
*** LITTLECMS2 support not available
*** WEBP support not available
*** WEBPMUX support not available
--------------------------------------------------------------------
As you can see, support for JPEG, TIFF, and so on is enabled, because I had previously installed the required libraries via apt (libjpeg-dev libpng12-dev libfreetype6-dev libtiff-dev).
I was already using Pillow and got the same error.
I tried installing libjpeg or libjpeg-dev as suggested by others, but was told that a (newer) version was already installed.
This question was posted quite a while ago and most of the answers are quite old too. I spent hours trying to figure this out; nothing worked, even though I tried all of the suggestions in this post.
I was still getting the standard JPEG errors when trying to upload a JPG in my Django avatar form:
raise IOError("decoder %s not available" % decoder_name)
OSError: decoder jpeg not available
Then I checked the repository for Ubuntu 12.04 and noticed some extra packages for libjpeg. I installed these and my problem was solved:
sudo apt-get install libjpeg62 libjpeg62-dev
Installing these removed libjpeg-dev, libjpeg-turbo8-dev, and libjpeg8-dev.
Hope this helps someone in the year 2015 and beyond!
Cheers
Answer 11
Same problem here: "JPEG support available", but I still got IOError: decoder/encoder jpeg not available, except that I use Pillow rather than PIL.
I tried all of the above and more, but after many hours I realized that using sudo pip install does not work as I expected, in combination with virtualenv. Silly me.
Using sudo effectively launches the command in a new shell (my understanding of this may not be entirely correct) where the virtualenv is not activated, meaning that the packages will be installed in the global environment instead. (This messed things up, I think I had 2 different installations of Pillow.)
I cleaned things up, changed user to root and reinstalled in the virtualenv and now it works.
Hopefully this will help someone!
First I had to delete the Python folders in the hidden user/AppData folder (that was causing huge headaches), in addition to uninstalling Python. Then I installed the WinPython distribution (http://code.google.com/p/winpython/), which includes PIL.
Answer 14
For users on Mac OS Mountain Lion, I followed zeantsoi's tip, but it didn't work.