# Keras RGB to Grayscale

By : Nelly Quintero de Ru
Date : November 19 2020, 03:01 PM
There are a few formulas for transforming a color image into a grayscale image. They are well established, and the choice often depends on whether you want brighter or darker results, better contrast, and so on.
Three common formulas exist; let's take the "luminosity" formula as an example. code :
``````result =  0.21 R + 0.72 G + 0.07 B
``````
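As a quick sanity check, this weighting can be applied to a whole batch with plain NumPy broadcasting (the shapes below are made up for illustration):

```python
import numpy as np

# a dummy batch of 2 RGB images, 4x4 pixels, channels last
x = np.random.rand(2, 4, 4, 3)

weights = np.array([0.21, 0.72, 0.07])  # luminosity weights for R, G, B
gray = np.sum(x * weights, axis=-1, keepdims=True)

print(gray.shape)  # (2, 4, 4, 1)
```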
``````def converter(x):
    # x has shape (batch, width, height, channels)
    return (0.21 * x[:, :, :, :1]) + (0.72 * x[:, :, :, 1:2]) + (0.07 * x[:, :, :, -1:])
``````
``````Lambda(converter)
``````
``````Lambda(lambda x: tf.image.rgb_to_grayscale(x))
``````
``````# perhaps faster? perhaps slower?
def converter(x):
    weights = K.constant([[[[0.21, 0.72, 0.07]]]])
    return K.sum(x * weights, axis=-1, keepdims=True)
``````

## Plotting the area of an object in grayscale against its intensity level in the grayscale image

By : David Boone
Date : March 29 2020, 07:55 AM
Basically what I am trying to produce is the histogram of the image at varying grayscale intensities, showing the area of the connected components in the image. , Based on your existing code:
code :
``````img = rgb2gray(imread('W1\Writer1_01_02.jpg'));
k = 1:-0.01:0.1;
bins = 1:100; % change depending on your image

% preallocate output - this will be filled with histograms
histout = zeros(length(k), length(bins));

for m = 1:length(k)
    bw_normal = im2bw(img, k(m));
    bw = imcomplement(bw_normal);
    [label, n] = bwlabel(bw);
    stats = regionprops(label, img, {'Area'});
    A = cell2mat(struct2cell(stats));
    histout(m, :) = hist(A, bins);
end
``````
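The threshold sweep itself can also be sketched in NumPy. This simplified version counts the total foreground area per threshold; the per-component areas that `regionprops` gives would additionally require a connected-component labeling routine:

```python
import numpy as np

np.random.seed(0)
img = np.random.rand(8, 8)                 # stand-in for the grayscale image

thresholds = np.arange(1.0, 0.09, -0.01)   # mirrors k = 1:-0.01:0.1
areas = np.zeros(len(thresholds))

for m, k in enumerate(thresholds):
    bw = img < k         # im2bw followed by imcomplement: dark pixels are foreground
    areas[m] = bw.sum()  # total foreground area at this threshold

print(areas[0])  # 64.0 -- every pixel of the 8x8 image is below 1.0
```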
``````imagesc(histout)
colormap('jet')
set(gca,'XTickLabel',bins(get(gca,'XTick')));
set(gca,'YTickLabel',k(get(gca,'YTick')));
xlabel('Area')
ylabel('Threshold')
``````

## Is there a way to convert an image from grayscale to RGB in "pure" Keras

By : Daniel Jackson
Date : March 29 2020, 07:55 AM
Maybe you would consider this "cheating" (as keras.backend may end up calling TensorFlow behind the scenes), but here's a solution:
code :
``````from keras import backend as K

def grayscale_to_rgb(images, channel_axis=-1):
    images = K.expand_dims(images, axis=channel_axis)
    tiling = [1] * 4    # 4 dimensions: B, H, W, C
    tiling[channel_axis] *= 3
    images = K.tile(images, tiling)
    return images
``````

## Importing an image as grayscale and converting it to grayscale doesn't produce the same result when multiplying it by 255

By : user2904473
Date : March 29 2020, 07:55 AM
There are two reasons for the observed difference in results: a difference in data type, and the channel order.
code :
``````gray_image = gray_image * 255.0
``````
``````[0.114, 0.587, 0.299]
``````
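These weights look reversed because OpenCV's `imread` returns channels in BGR order; they are the usual `[0.299, 0.587, 0.114]` luma weights flipped. A quick NumPy check on a hypothetical pixel:

```python
import numpy as np

# one hypothetical pixel in BGR order, values in [0, 1]
bgr_pixel = np.array([0.2, 0.5, 0.8])  # B, G, R

# BGR weights are the RGB luma weights reversed
bgr_weights = np.array([0.114, 0.587, 0.299])
rgb_weights = bgr_weights[::-1]

gray_from_bgr = np.dot(bgr_pixel, bgr_weights)
gray_from_rgb = np.dot(bgr_pixel[::-1], rgb_weights)

print(np.isclose(gray_from_bgr, gray_from_rgb))  # True
```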
``````def func(gray_image, rgb_image):
    # gray_image: image read directly as grayscale (float, range [0, 1])
    # rgb_image: image read as RGB; note the reversed (BGR-style) weights below
    rgb_converted_to_gray_image = np.dot(rgb_image[..., :3], [0.114, 0.587, 0.299])

    print("Before multiplying with 255")
    print(gray_image)
    print("------------")
    print(rgb_converted_to_gray_image)

    gray_image = gray_image * 255.0
    rgb_converted_to_gray_image = rgb_converted_to_gray_image * 255

    print("After multiplying with 255")
    print(gray_image)
    print("------------")
    print(rgb_converted_to_gray_image)
``````

## Is it possible to feed the pretrained Inception model (tensorflow 2.0/Keras) with 2D grayscale images?

By : Lourens Groenewald
Date : March 29 2020, 07:55 AM
According to the Keras 2.0 documentation, in relation to the input shape of the images that can be fed to the pretrained Inception model: , You can copy the grayscale image 3 times to get a pseudo-RGB image
code :
``````import numpy as np
# img=np.zeros((224,224))
``````
``````img = np.expand_dims(img,-1)
``````
``````img = np.repeat(img,3,2)
print(img.shape)
# (224,224,3)
``````

## Keras: Image segmentation using grayscale masks and ImageDataGenerator class

By : settheorynoob
Date : March 29 2020, 07:55 AM
I am currently trying to implement a convolutional network using Keras 2.1.6 (with TensorFlow as backend) and its ImageDataGenerator to segment an image using a grayscale mask. I try to use an image as input, and a mask as label. Due to a low amount of training images and memory constraints, I utilize the ImageDataGenerator class provided in Keras. , Here is a better version of Unet; you can use this code
code :
``````def conv_block(tensor, nfilters, size=3, padding='same', initializer="he_normal"):
    x = Conv2D(filters=nfilters, kernel_size=(size, size), padding=padding, kernel_initializer=initializer)(tensor)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Conv2D(filters=nfilters, kernel_size=(size, size), padding=padding, kernel_initializer=initializer)(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    return x

def deconv_block(tensor, residual, nfilters, size=3, padding='same', strides=(2, 2)):
    y = Conv2DTranspose(nfilters, kernel_size=(size, size), strides=strides, padding=padding)(tensor)
    y = concatenate([y, residual], axis=3)
    y = conv_block(y, nfilters)
    return y

def Unet(img_height, img_width, nclasses=3, filters=64):
    # down
    input_layer = Input(shape=(img_height, img_width, 3), name='image_input')
    conv1 = conv_block(input_layer, nfilters=filters)
    conv1_out = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = conv_block(conv1_out, nfilters=filters*2)
    conv2_out = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = conv_block(conv2_out, nfilters=filters*4)
    conv3_out = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = conv_block(conv3_out, nfilters=filters*8)
    conv4_out = MaxPooling2D(pool_size=(2, 2))(conv4)
    conv4_out = Dropout(0.5)(conv4_out)
    conv5 = conv_block(conv4_out, nfilters=filters*16)
    conv5 = Dropout(0.5)(conv5)
    # up
    deconv6 = deconv_block(conv5, residual=conv4, nfilters=filters*8)
    deconv6 = Dropout(0.5)(deconv6)
    deconv7 = deconv_block(deconv6, residual=conv3, nfilters=filters*4)
    deconv7 = Dropout(0.5)(deconv7)
    deconv8 = deconv_block(deconv7, residual=conv2, nfilters=filters*2)
    deconv9 = deconv_block(deconv8, residual=conv1, nfilters=filters)
    # output
    output_layer = Conv2D(filters=nclasses, kernel_size=(1, 1))(deconv9)
    output_layer = BatchNormalization()(output_layer)
    output_layer = Activation('softmax')(output_layer)

    model = Model(inputs=input_layer, outputs=output_layer, name='Unet')
    return model
``````
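As a sanity check on the encoder above (pure arithmetic, no Keras required): each of the four MaxPooling2D steps halves the spatial size, so a 256x256 input reaches the conv5 bottleneck at 16x16.

```python
h = w = 256
for level in range(4):  # four MaxPooling2D(pool_size=(2, 2)) stages
    h //= 2
    w //= 2

print((h, w))  # (16, 16)
```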
``````output_layer = Conv2D(filters=nclasses, kernel_size=(1, 1))(deconv9)
output_layer = BatchNormalization()(output_layer)
output_layer = Activation('softmax')(output_layer)
``````
``````output_layer = Conv2D(filters=2, kernel_size=(1, 1))(deconv9)
output_layer = BatchNormalization()(output_layer)
output_layer = Activation('sigmoid')(output_layer)
``````
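On the choice between these two heads: for two classes, a softmax over a pair of logits is equivalent to a sigmoid applied to their difference, which is why the binary variant above can use sigmoid. A small NumPy check with made-up logits:

```python
import numpy as np

logits = np.array([1.5, -0.5])  # hypothetical per-pixel logits for 2 classes

softmax = np.exp(logits) / np.sum(np.exp(logits))
sigmoid_of_diff = 1.0 / (1.0 + np.exp(-(logits[0] - logits[1])))

print(np.isclose(softmax[0], sigmoid_of_diff))  # True
```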
``````# we create two instances with the same arguments
data_gen_args = dict(featurewise_center=True,
                     featurewise_std_normalization=True,
                     rotation_range=90,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)

# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
image_datagen.fit(images, augment=True, seed=seed)
mask_datagen.fit(masks, augment=True, seed=seed)

image_generator = image_datagen.flow_from_directory(
    'data/images',
    class_mode=None,
    seed=seed)

mask_generator = mask_datagen.flow_from_directory(
    'data/masks',
    class_mode=None,
    seed=seed)

# combine generators into one which yields image and masks
train_generator = zip(image_generator, mask_generator)

model.fit_generator(
    train_generator,
    steps_per_epoch=2000,
    epochs=50)
``````
``````class seg_gen(Sequence):
    def __init__(self, x_set, y_set, batch_size, image_dir, mask_dir):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.samples = len(self.x)
        self.image_dir = image_dir
        self.mask_dir = mask_dir

    def __len__(self):
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        # sample a random batch of indices
        idx = np.random.randint(0, self.samples, self.batch_size)
        batch_x, batch_y = [], []
        for i in idx:
            # load the image and its mask here, e.g. with imread
            _image = imread(os.path.join(self.image_dir, self.x[i]))
            _mask = imread(os.path.join(self.mask_dir, self.y[i]))
            batch_x.append(_image)
            batch_y.append(_mask)
        return np.array(batch_x), np.array(batch_y)
``````
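The `__len__` above rounds up so the last partial batch is still served; for example, 10 samples with a batch size of 3 yield 4 batches:

```python
import numpy as np

n_samples, batch_size = 10, 3
n_batches = int(np.ceil(n_samples / float(batch_size)))
print(n_batches)  # 4
```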
``````unet = Unet(256, 256, nclasses=66, filters=64)
print(unet.output_shape)
p_unet = multi_gpu_model(unet, 4)
tb = TensorBoard(log_dir='logs', write_graph=True)
mc = ModelCheckpoint(mode='max', filepath='models-dr/top_weights.h5', monitor='acc', save_best_only=True, save_weights_only=True, verbose=1)
es = EarlyStopping(mode='max', monitor='acc', patience=6, verbose=1)
callbacks = [tb, mc, es]

p_unet.fit_generator(train_gen, steps_per_epoch=steps, epochs=13, callbacks=callbacks, workers=8)
``````
``````def dice_coeff(y_true, y_pred):
    smooth = 1.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    score = (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return score

def dice_loss(y_true, y_pred):
    loss = 1 - dice_coeff(y_true, y_pred)
    return loss
``````
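The Dice coefficient above can be verified by hand on a tiny pair of made-up binary masks: with 2 true pixels, 1 predicted pixel, and 1 pixel of overlap, it comes out to (2*1+1)/(2+1+1) = 0.75.

```python
import numpy as np

y_true = np.array([1., 1., 0., 0.])
y_pred = np.array([1., 0., 0., 0.])

smooth = 1.0
intersection = np.sum(y_true * y_pred)  # 1 overlapping pixel
dice = (2. * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

print(dice)  # 0.75
```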