Learning high bias in a neural net

I have a simple model that tries to predict a constant $[1, 1, \dots, 1, 0, \dots, 0]$ vector regardless of the input. I found that the model predicts it successfully when trained on inputs in the $[0, 10]$ range; however, its predictions are always all-zero vectors when it is trained on inputs in the $[750, 770]$ range.

I was expecting the model to converge to high bias weights and still be able to predict the constant vector even for the larger training inputs.
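
For scale, here is a rough check (a NumPy sketch, separate from my training code) of the typical first-layer pre-activation magnitude under the default Glorot-uniform initialization:

```
import numpy as np

# Sketch: typical pre-activation magnitude for Dense(60 -> 100) under
# Glorot-uniform init, given a constant input vector of value c.
fan_in, fan_out = 60, 100
limit = np.sqrt(6.0 / (fan_in + fan_out))   # Glorot-uniform bound
w = np.random.uniform(-limit, limit, size=(fan_in, fan_out))

for c in (5.0, 760.0):
    z = np.full(fan_in, c) @ w              # pre-activations before ReLU
    print(f"input ~{c:5.0f}: mean |z| = {np.abs(z).mean():.1f}")
# input ~    5: mean |z| ~ 3-4
# input ~  760: mean |z| ~ 500-550
```

With inputs around 760, the pre-activations (and hence the initial MSE gradients) are two orders of magnitude larger than in the $[0, 10]$ case, which I suspect is why training behaves so differently.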

Can anyone advise what I am doing wrong, or how my model can be improved? (I sketch one idea, rescaling the inputs, after the code below.)

```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import random

batch_size = 1024

def random_generator():
    # Yields (input, target) pairs: 60 random ints and the constant target.
    for i in range(100000):
        yield (
            # [random.randint(0, 10) for i in range(60)],   # Input in [0, 10] range
            [random.randint(750, 770) for i in range(60)],  # Input in [750, 770] range
            [1] * 20 + [0] * 80)

def create_ds():
    ds = tf.data.Dataset.from_generator(
        random_generator,
        output_types=(tf.float32, tf.float32),
        output_shapes=((60,), (100,)))
    ds = ds.batch(batch_size, drop_remainder=True)
    ds = ds.cache()
    return ds

inputs = keras.Input(shape=(60,), dtype=tf.float32, batch_size=batch_size)
net = layers.Dense(100, activation='relu')(inputs)
net = layers.Dense(100, activation='relu')(net)  # output layer, also ReLU
model = keras.Model(inputs=inputs, outputs=net)

model.summary()

model.compile(loss=tf.keras.losses.MeanSquaredError(),
              optimizer=keras.optimizers.SGD(learning_rate=1e-3))
model.fit(create_ds(), epochs=3000)

# Print predictions for one batch.
for (x, y) in create_ds():
    predictions = model.predict(x)
    for p in predictions:
        print('predictions: ' + str(p))
    break
```
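
For reference, this is the kind of change I have in mind when asking about improvements: an untested sketch (assuming `layers.Rescaling`, available in TF >= 2.6) that maps the raw inputs into $[0, 1]$ before the dense layers so the first-layer pre-activations stay small:

```
# Untested sketch: rescale inputs from [750, 770] into [0, 1] before the
# dense layers; Rescaling computes x * scale + offset.
inputs = keras.Input(shape=(60,), dtype=tf.float32, batch_size=batch_size)
scaled = layers.Rescaling(scale=1.0 / 20.0, offset=-750.0 / 20.0)(inputs)
net = layers.Dense(100, activation='relu')(scaled)
net = layers.Dense(100, activation='relu')(net)
model = keras.Model(inputs=inputs, outputs=net)
```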

Tags: bias, keras, tensorflow, neural-network

Category: Data Science
