OpenShift initial configuration

Hi,

After a long fight at work I was given a server for testing, and my idea is to install OpenShift Origin on CentOS. What I don't know are the prerequisites. If I install a VM with CentOS:

How much disk space do I need?
Which mount points do I need to create?

From what I've read I would have to create three nodes, each with 16 GB. But I don't know the rest. I'd appreciate it if someone could tell me how to get started.

Regards and thanks

1 Answer

If you're starting from zero and don't care about HA or distributing the roles, look at Minishift: it's an all-in-one VM that makes your life easier. Or bring it up as Docker containers using the binary client and running "oc cluster up".

Now, if your idea is to put together something like a dev environment, without HA but with the roles distributed, you will need (a sample inventory sketch follows the list):

1 VM with 8 GB of RAM for the master
2 VMs with 8 GB of RAM for YOUR containers
2 VMs with 4 GB of RAM for the infra (routers, registry, metrics).
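To give you an idea, the advanced install drives those five VMs from an openshift-ansible inventory more or less like this (hostnames and labels are just placeholders, adjust them to your environment):

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com
node1.example.com  openshift_node_labels="{'region': 'primary'}"
node2.example.com  openshift_node_labels="{'region': 'primary'}"
infra1.example.com openshift_node_labels="{'region': 'infra'}"
infra2.example.com openshift_node_labels="{'region': 'infra'}"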

My recommendation: start with Minishift, then read docs.openshift.org and set up the dev environment.
answered by pablo halamaj Nov 24, 2017
10 Comments
commented by fmontaldo3 (190 points) Nov 24, 2017
Hi Pablo,

Thanks a lot for answering. The first thing I'm going to do is set up a DNS master and a slave for name resolution. Then I'll carry on with the installation, and I'd appreciate it if you could keep helping me. I'll start with the DNS this weekend and post my comments.

Regards
commented by pablohalamaj (950 points) Nov 24, 2017
Sure, ask me whatever you need.

If you want to keep it simple, instead of DNS you can use your hosts file, but that is clearly not scalable ;)
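Something as simple as this in /etc/hosts on every machine (the IPs and names here are just placeholders):

192.168.0.10  master.example.com  master
192.168.0.11  node1.example.com   node1
192.168.0.12  infra1.example.com  infra1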

Regards!
commented by fmontaldo3 (190 points) Dec 1, 2017
Pablo,

I finally installed OpenShift on a single VM only. I followed this tutorial, because what I saw on the OpenShift site is all Greek to me.

http://erikkrogstad.com/installing-openshift-and-docker-with-centos-7-on-google-cloud-gcp-and-deploy-a-new-app/

Could you help me check whether the installation is correct? Here is the state of the cluster.

[[email protected] openshift-origin-server-v3.7.0]# oadm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/opt/openshift-origin-server-v3.7.0/openshift.local.config/master/admin.kubeconfig'
Info:  Using context for cluster-admin access: 'myproject/127-0-0-1:8443/system:admin'

[Note] Running diagnostic: ConfigContexts[default/10-20-20-200:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

ERROR: [DCli0015 from diagnostic [email protected]/origin/pkg/diagnostics/client/config_contexts.go:285]
       For client config context 'default/10-20-20-200:8443/system:admin':
       The server URL is 'https://10.20.20.200:8443'
       The user authentication is 'system:admin/10-20-20-200:8443'
       The current project is 'default'
       (*url.Error) Get https://10.20.20.200:8443/apis/project.openshift.io/v1/projects: Forbidden
       Diagnostics does not have an explanation for what this means. Please report this error so one can be added.

[Note] Running diagnostic: ConfigContexts[default/127-0-0-1:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

Info:  For client config context 'default/127-0-0-1:8443/system:admin':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'system:admin/127-0-0-1:8443'
       The current project is 'default'
       Successfully requested project list; has access to project(s):
         [default kube-public kube-system myproject openshift openshift-infra openshift-node]

[Note] Running diagnostic: ConfigContexts[myproject/127-0-0-1:8443/developer]
       Description: Validate client config context is complete and has connectivity

Info:  For client config context 'myproject/127-0-0-1:8443/developer':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'developer/127-0-0-1:8443'
       The current project is 'myproject'
       Successfully requested project list; has access to project(s):
         [myproject]

[Note] Running diagnostic: DiagnosticPod
       Description: Create a pod to run diagnostics from the application standpoint

ERROR: [DCli2012 from diagnostic [email protected]/origin/pkg/diagnostics/client/run_diagnostics_pod.go:178]
       See the errors below in the output from the diagnostic pod:
       [Note] Running diagnostic: PodCheckAuth
              Description: Check that service account credentials authenticate as expected

       WARN:  [DP1005 from diagnostic [email protected]/origin/pkg/diagnostics/pod/auth.go:87]
              A request to the master timed out.
              This could be temporary but could also indicate network or DNS problems.

       ERROR: [DP1014 from diagnostic [email protected]/origin/pkg/diagnostics/pod/auth.go:175]
              Request to integrated registry timed out; this typically indicates network or SDN problems.

       [Note] Running diagnostic: PodCheckDns
              Description: Check that DNS within a pod works as expected

       WARN:  [DP2014 from diagnostic [email protected]/origin/pkg/diagnostics/pod/dns.go:119]
              A request to the nameserver 172.30.0.1 timed out.
              This could be temporary but could also indicate network or DNS problems.

       [Note] Summary of diagnostics execution (version v3.7.0+7ed6862):
       [Note] Warnings seen: 2
       [Note] Errors seen: 1

[Note] Running diagnostic: NetworkCheck
       Description: Create a pod on all schedulable nodes and run network diagnostics from the application standpoint

Info:  Skipping network diagnostics check. Reason: Not using openshift network plugin.

[Note] Skipping diagnostic: AggregatedLogging
       Description: Check aggregated logging integration for proper configuration
       Because: No master config file was provided

[Note] Running diagnostic: ClusterRegistry
       Description: Check that there is a working Docker registry

[Note] Running diagnostic: ClusterRoleBindings
       Description: Check that the default ClusterRoleBindings are present and contain the expected subjects

Info:  clusterrolebinding/system:controller:horizontal-pod-autoscaler has more subjects than expected.

       Use the `oc adm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/system:controller:horizontal-pod-autoscaler has extra subject {ServiceAccount  horizontal-pod-autoscaler openshift-infra}.

Info:  clusterrolebinding/system:controller:horizontal-pod-autoscaler has more subjects than expected.

       Use the `oc adm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/system:controller:horizontal-pod-autoscaler has extra subject {ServiceAccount  horizontal-pod-autoscaler kube-system}.

Info:  clusterrolebinding/cluster-admin has more subjects than expected.

       Use the `oc adm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/cluster-admin has extra subject {ServiceAccount  pvinstaller default}.
Info:  clusterrolebinding/cluster-admin has extra subject {User rbac.authorization.k8s.io admin }.

[Note] Running diagnostic: ClusterRoles
       Description: Check that the default ClusterRoles are present and contain the expected permissions

[Note] Running diagnostic: ClusterRouterName
       Description: Check there is a working router

[Note] Running diagnostic: MasterNode
       Description: Check if master is also running node (for Open vSwitch)

WARN:  [DClu3004 from diagnostic [email protected]/origin/pkg/diagnostics/cluster/master_node.go:162]
       Unable to find a node matching the cluster server IP.
       This may indicate the master is not also running a node, and is unable
       to proxy to pods over the Open vSwitch SDN.

[Note] Skipping diagnostic: MetricsApiProxy
       Description: Check the integrated heapster metrics can be reached via the API proxy
       Because: The heapster service does not exist in the openshift-infra project at this time,
       so it is not available for the Horizontal Pod Autoscaler to use as a source of metrics.

[Note] Running diagnostic: NodeDefinitions
       Description: Check node records on master

[Note] Running diagnostic: RouteCertificateValidation
       Description: Check all route certificates for certificates that might be rejected by extended validation.

[Note] Skipping diagnostic: ServiceExternalIPs
       Description: Check for existing services with ExternalIPs that are disallowed by master config
       Because: No master config file was detected

[Note] Summary of diagnostics execution (version v3.7.0+7ed6862):
[Note] Warnings seen: 1
[Note] Errors seen: 2
[[email protected] openshift-origin-server-v3.7.0]#

Thanks in advance.
commented by pablohalamaj (950 points) Dec 3, 2017
Hi,

From what I can see there are 2 errors in the output you pasted:

A) when it tries to access https://10.20.20.200:8443

B) when it tries to access OpenShift's internal registry

It looks like a DNS issue (OpenShift is very dependent on DNS being set up correctly), since the console works when it is accessed via 127.0.0.1.
These are the checks that come to mind to see what might be going on (the commands are sketched after this list):

A) Check how /etc/resolv.conf ended up configured on the VM, and make sure it points to a DNS server that correctly resolves the VM's IP and hostname.

B) From the VM, ping the hostname that the OpenShift console ended up configured with and see which IP it returns.
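On the command line that is more or less this (replace the placeholder with whatever hostname your console URL ended up using):

cat /etc/resolv.conf
hostname -f
ping -c 3 <console-hostname>
nslookup <console-hostname>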


The way that page brings up OpenShift is rather convoluted and very manual; see whether following these steps isn't simpler for you:

https://docs.openshift.org/latest/getting_started/administrators.html#downloading-the-binary

Note: before starting OpenShift, ports 8443 and 53 always have to be free and the Docker service has to be running.
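For example, a quick way to check that before starting it:

# nothing else should be listening on 8443 or 53
ss -tulnp | grep -E ':8443|:53'
# Docker has to be up
systemctl status docker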

Let me know if any of this worked for you.

Regards!
commented by fmontaldo3 (190 points) Dec 20, 2017
Hi Pablo,

I managed to install OpenShift following this guide.

https://www.server-world.info/en/note?os=CentOS_7&p=openshift&f=1

What I can't get to work is the example app it suggests to get started with.

[[email protected] ~]$  oc new-project test-project
Now using project "test-project" on server "https://master:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.
[[email protected] ~]$

[[email protected] ~]$ oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
--> Found Docker image 9eef88b (23 hours old) from Docker Hub for "centos/ruby-22-centos7"

    Ruby 2.2
    --------
    Ruby 2.2 available as docker container is a base platform for building and running various Ruby 2.2 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible.

    Tags: builder, ruby, ruby22

    * An image stream will be created as "ruby-22-centos7:latest" that will track the source image
    * A source build using source code from https://github.com/openshift/ruby-ex.git will be created
      * The resulting image will be pushed to image stream "ruby-ex:latest"
      * Every time "ruby-22-centos7:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "ruby-ex"
    * Port 8080/tcp will be load balanced by service "ruby-ex"
      * Other containers can access this service through the hostname "ruby-ex"

--> Creating resources ...
    imagestream "ruby-22-centos7" created
    imagestream "ruby-ex" created
    buildconfig "ruby-ex" created
    deploymentconfig "ruby-ex" created
    service "ruby-ex" created
--> Success
    Build scheduled, use 'oc logs -f bc/ruby-ex' to track its progress.
    Run 'oc status' to view your app.
[[email protected] ~]$

[[email protected] ~]$ oc status
In project test-project on server https://master:8443

svc/ruby-ex - 172.30.9.116:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest
      build #1 failed 18 seconds ago
    deployment #1 waiting on image or update

Errors:
  * build/ruby-ex-1 has failed.

1 error and 1 warning identified, use 'oc status -v' to see details.
[[email protected] ~]$

I also tried the "tag an application image from Docker Hub" step from the guide (output further below):

[[email protected] ~]$ oc status -v
In project test-project on server https://master:8443

svc/ruby-ex - 172.30.9.116:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest
      build #1 failed about a minute ago
    deployment #1 waiting on image or update

Errors:
  * build/ruby-ex-1 has failed.
    try: Inspect the build failure with 'oc logs -f bc/ruby-ex'

Warnings:
  * The image trigger for dc/ruby-ex will have no effect until istag/ruby-ex:latest is imported or created by a build.

Info:
  * pod/ruby-ex-1-build has no liveness probe to verify pods are still running.
    try: oc set probe pod/ruby-ex-1-build --liveness ...
  * dc/ruby-ex has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/ruby-ex --readiness ...
  * dc/ruby-ex has no liveness probe to verify pods are still running.
    try: oc set probe dc/ruby-ex --liveness ...

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
[[email protected] ~]$


[[email protected] ~]$ oc logs -f bc/ruby-ex
Cloning "https://github.com/openshift/ruby-ex.git" ...
error: build error: fatal: unable to access 'https://github.com/openshift/ruby-ex.git/': Could not resolve host: github.com; Unknown error
[[email protected] ~]$



[[email protected] ~]$ oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
Tag deployment-example:latest set to openshift/deployment-example:v2.
[[email protected] ~]$

Could you tell me what steps I need to follow to fix this?

Regards and thanks
commented by pablohalamaj (950 points) Dec 21, 2017
I think you have DNS problems.

What is happening is that when you build your Ruby example, OpenShift starts a builder container to pull down the code and build the Docker image that will be your application.
For that it tries to connect to GitHub, and the DNS configuration inside the builder is inherited from the DNS configuration of the host where that container runs.


Be aware that OpenShift 3.6 used to break the DNS configuration, because the installer drops in its own DNS software (dnsmasq, caching between your DNS servers and the SkyDNS that holds the data for OpenShift's services).
Most likely, if you look at /etc/resolv.conf on your nodes, the nameserver IP is the server's own IP. That is OK, because that is how dnsmasq gets in the middle; but then the dnsmasq config should say that queries for the "cluster.local" domains (Kubernetes' internal services) are resolved by SkyDNS, and queries for any other domain go to your real DNS servers (e.g. 8.8.8.8).

Check that by looking at the configuration in "/etc/dnsmasq.d/node-dnsmasq.conf" and "/etc/resolv.conf".
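I don't remember exactly which file the 3.7 installer drops there, but conceptually the dnsmasq side should end up with something along these lines (172.30.0.1 is the SkyDNS / kubernetes service IP, 8.8.8.8 is just an example upstream), while /etc/resolv.conf on the node keeps pointing at the node's own IP:

# cluster-internal names go to SkyDNS
server=/cluster.local/172.30.0.1
server=/30.172.in-addr.arpa/172.30.0.1
# everything else goes to your real DNS
server=8.8.8.8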
commented by fmontaldo3 (190 points) Dec 21, 2017
Hi Pablo,

First thing tomorrow I'll send you the configuration. For now I'm going to run an oadm diagnostics to see what is failing.
commented by fmontaldo3 (190 points) Dec 26, 2017
Hi Pablo,

Here is the configuration I have.

[[email protected] ~]# cat /etc/resolv.conf
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
# Generated by NetworkManager
search dev.nps.com cluster.local
nameserver 10.32.8.201
[[email protected] ~]#

Every time I reboot, this script rewrites the config:

/etc/NetworkManager/dispatcher.d/99-origin-dns.sh

Every time I try to create an example app it throws the following error:

httpd-ex failed to create in my-project.
Imagestreamtags.image.openshift.io "httpd:2.4" not found
commented by pablohalamaj (950 points) Jan 8
Sorry for the delay,

That script is there to make sure the system resolves names through the local DNS.
I assume the IP 10.32.8.201 is the VM's own.
If you ping www.google.com, or any other address, from the VM, does it resolve correctly?
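For example:

ping -c 3 www.google.com
nslookup www.google.com 10.32.8.201
nslookup <the-VM-hostname> 10.32.8.201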

I haven't installed OpenShift from scratch for a couple of versions (1.5); I'll do a fresh install to see how the config files end up and send them to you.

Regards!
commented by fmontaldo3 (190 points) Jan 24
Hi Pablo,

Sorry for the delay in my reply. I really don't know how to move forward with this, because I can't get it to work, and in the middle of it all I went on vacation. If you could send me the installation procedure you follow, it would be a big help. My email is [email protected]ail.com.

Regards
...