OpenShift initial configuration

Hi,

After some struggling at work I was given a server to run tests on, and my idea is to install OpenShift Origin on CentOS. What I don't know are the prerequisites. If I install a VM with CentOS:

How much disk space do I need?
Which mount points do I need to create?

From what I've read I would have to create three nodes, each with 16 GB. But the rest I don't know. I'd appreciate it if someone could tell me how to get started.

Regards and thanks

1 Answer

If you're starting from zero and don't care about HA or distributing roles, look at Minishift, an all-in-one VM that makes your life easier, or bring it up as Docker containers by using the oc client binary and running "oc cluster up".

Now, if your idea is to build something as a dev environment, without HA but with the roles distributed, you will need:

1 VM with 8 GB of RAM for the master
2 VMs with 8 GB of RAM for YOUR containers
2 VMs with 4 GB of RAM for the infra (routers, registry, metrics).

My recommendation: start with Minishift, then read docs.openshift.org and install the dev environment.
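The "oc cluster up" route boils down to a few commands, something like the sketch below. The exact tarball name and URL are an assumption (v3.7.0 here); check the Origin releases page for the current one before copying:

```shell
# Download and unpack the OpenShift Origin client tools (version and tarball
# name are assumptions; see https://github.com/openshift/origin/releases)
wget https://github.com/openshift/origin/releases/download/v3.7.0/openshift-origin-client-tools-v3.7.0-7ed6862-linux-64bit.tar.gz
tar xzf openshift-origin-client-tools-v3.7.0-7ed6862-linux-64bit.tar.gz
cd openshift-origin-client-tools-v3.7.0-7ed6862-linux-64bit

# Bring up an all-in-one cluster as Docker containers; this needs a running
# Docker daemon and ports 8443/53 free on the host
sudo ./oc cluster up
```

This runs everything on a single machine, so it's only meant for playing around, not for the distributed layout described above.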
answered by pablo halamaj Nov 24
4 Comments
commented by fmontaldo3 (170 points) Nov 24
Hi Pablo,

Thanks a lot for answering. The first thing I'm going to do is set up a master DNS server and a slave for name resolution. Then I'll move on to the installation, and if you can keep helping me I'd appreciate it. This weekend I'll start with the DNS and post my comments.

Regards
commented by pablohalamaj (800 points) Nov 24
Sure, ask me whatever you need.

If you want to make it easy, instead of DNS you can use your hosts file, but that is clearly not scalable ;)
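For reference, the hosts-file shortcut is just static entries on every machine; the IPs and names below are made-up examples:

```
# /etc/hosts (hypothetical hostnames and IPs, adjust to your network)
10.20.20.200   master.oshift.local    master
10.20.20.201   node1.oshift.local     node1
10.20.20.202   node2.oshift.local     node2
```

Every machine in the cluster needs the same entries, which is exactly why this stops scaling past a handful of nodes.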

Regards!
commented by fmontaldo3 (170 points) Dec 1
Pablo,

I finally installed OpenShift on a single VM. I followed this tutorial, since what I saw on the OpenShift page was all Greek to me.

http://erikkrogstad.com/installing-openshift-and-docker-with-centos-7-on-google-cloud-gcp-and-deploy-a-new-app/

Could you help me verify whether the installation is correct? Here is the cluster status.

[[email protected] openshift-origin-server-v3.7.0]# oadm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/opt/openshift-origin-server-v3.7.0/openshift.local.config/master/admin.kubeconfig'
Info:  Using context for cluster-admin access: 'myproject/127-0-0-1:8443/system:admin'

[Note] Running diagnostic: ConfigContexts[default/10-20-20-200:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

ERROR: [DCli0015 from diagnostic [email protected]/origin/pkg/diagnostics/client/config_contexts.go:285]
       For client config context 'default/10-20-20-200:8443/system:admin':
       The server URL is 'https://10.20.20.200:8443'
       The user authentication is 'system:admin/10-20-20-200:8443'
       The current project is 'default'
       (*url.Error) Get https://10.20.20.200:8443/apis/project.openshift.io/v1/projects: Forbidden
       Diagnostics does not have an explanation for what this means. Please report this error so one can be added.

[Note] Running diagnostic: ConfigContexts[default/127-0-0-1:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

Info:  For client config context 'default/127-0-0-1:8443/system:admin':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'system:admin/127-0-0-1:8443'
       The current project is 'default'
       Successfully requested project list; has access to project(s):
         [default kube-public kube-system myproject openshift openshift-infra openshift-node]

[Note] Running diagnostic: ConfigContexts[myproject/127-0-0-1:8443/developer]
       Description: Validate client config context is complete and has connectivity

Info:  For client config context 'myproject/127-0-0-1:8443/developer':
       The server URL is 'https://127.0.0.1:8443'
       The user authentication is 'developer/127-0-0-1:8443'
       The current project is 'myproject'
       Successfully requested project list; has access to project(s):
         [myproject]

[Note] Running diagnostic: DiagnosticPod
       Description: Create a pod to run diagnostics from the application standpoint

ERROR: [DCli2012 from diagnostic [email protected]/origin/pkg/diagnostics/client/run_diagnostics_pod.go:178]
       See the errors below in the output from the diagnostic pod:
       [Note] Running diagnostic: PodCheckAuth
              Description: Check that service account credentials authenticate as expected

       WARN:  [DP1005 from diagnostic Po[email protected]/origin/pkg/diagnostics/pod/auth.go:87]
              A request to the master timed out.
              This could be temporary but could also indicate network or DNS problems.

       ERROR: [DP1014 from diagnostic [email protected]/origin/pkg/diagnostics/pod/auth.go:175]
              Request to integrated registry timed out; this typically indicates network or SDN problems.

       [Note] Running diagnostic: PodCheckDns
              Description: Check that DNS within a pod works as expected

       WARN:  [DP2014 from diagnostic [email protected]/origin/pkg/diagnostics/pod/dns.go:119]
              A request to the nameserver 172.30.0.1 timed out.
              This could be temporary but could also indicate network or DNS problems.

       [Note] Summary of diagnostics execution (version v3.7.0+7ed6862):
       [Note] Warnings seen: 2
       [Note] Errors seen: 1

[Note] Running diagnostic: NetworkCheck
       Description: Create a pod on all schedulable nodes and run network diagnostics from the application standpoint

Info:  Skipping network diagnostics check. Reason: Not using openshift network plugin.

[Note] Skipping diagnostic: AggregatedLogging
       Description: Check aggregated logging integration for proper configuration
       Because: No master config file was provided

[Note] Running diagnostic: ClusterRegistry
       Description: Check that there is a working Docker registry

[Note] Running diagnostic: ClusterRoleBindings
       Description: Check that the default ClusterRoleBindings are present and contain the expected subjects

Info:  clusterrolebinding/system:controller:horizontal-pod-autoscaler has more subjects than expected.

       Use the `oc adm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/system:controller:horizontal-pod-autoscaler has extra subject {ServiceAccount  horizontal-pod-autoscaler openshift-infra}.

Info:  clusterrolebinding/system:controller:horizontal-pod-autoscaler has more subjects than expected.

       Use the `oc adm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/system:controller:horizontal-pod-autoscaler has extra subject {ServiceAccount  horizontal-pod-autoscaler kube-system}.

Info:  clusterrolebinding/cluster-admin has more subjects than expected.

       Use the `oc adm policy reconcile-cluster-role-bindings` command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/cluster-admin has extra subject {ServiceAccount  pvinstaller default}.
Info:  clusterrolebinding/cluster-admin has extra subject {User rbac.authorization.k8s.io admin }.

[Note] Running diagnostic: ClusterRoles
       Description: Check that the default ClusterRoles are present and contain the expected permissions

[Note] Running diagnostic: ClusterRouterName
       Description: Check there is a working router

[Note] Running diagnostic: MasterNode
       Description: Check if master is also running node (for Open vSwitch)

WARN:  [DClu3004 from diagnostic [email protected]/origin/pkg/diagnostics/cluster/master_node.go:162]
       Unable to find a node matching the cluster server IP.
       This may indicate the master is not also running a node, and is unable
       to proxy to pods over the Open vSwitch SDN.

[Note] Skipping diagnostic: MetricsApiProxy
       Description: Check the integrated heapster metrics can be reached via the API proxy
       Because: The heapster service does not exist in the openshift-infra project at this time,
       so it is not available for the Horizontal Pod Autoscaler to use as a source of metrics.

[Note] Running diagnostic: NodeDefinitions
       Description: Check node records on master

[Note] Running diagnostic: RouteCertificateValidation
       Description: Check all route certificates for certificates that might be rejected by extended validation.

[Note] Skipping diagnostic: ServiceExternalIPs
       Description: Check for existing services with ExternalIPs that are disallowed by master config
       Because: No master config file was detected

[Note] Summary of diagnostics execution (version v3.7.0+7ed6862):
[Note] Warnings seen: 1
[Note] Errors seen: 2
[[email protected] openshift-origin-server-v3.7.0]#

Thanks a lot in advance.
commented by pablohalamaj (800 points) Dec 3
Hi,

From what I can see, there are 2 errors in the output you pasted:

A) when it tries to access https://10.20.20.200:8443

B) when it tries to access OpenShift's internal registry

It looks like a DNS issue (OpenShift is very dependent on DNS being set up correctly), since accessing the console through 127.0.0.1 works.
The following checks come to mind to see what may be going on:

A) Check how /etc/resolv.conf ended up configured in the VM and make sure it points to a DNS server that correctly resolves the VM's IP and hostname

B) From the VM, ping the hostname the OpenShift console ended up configured with and see which IP it returns
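The two checks above can be run from a shell on the VM, roughly like this (`master.example.com` is a placeholder for whatever hostname your console is actually configured with):

```shell
# A) Inspect the resolver configuration and check that the VM's own
#    hostname resolves to its real IP (not 127.0.0.1)
cat /etc/resolv.conf
getent hosts "$(hostname)"

# B) Ping the hostname the OpenShift console is configured with and
#    look at which IP it resolves to (hostname below is a placeholder)
ping -c 1 master.example.com
```

If `getent` returns 127.0.0.1 for the VM's own hostname, that matches the symptom in the diagnostics: the loopback context works while the 10.20.20.200 one fails.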


The way OpenShift is brought up on that page is somewhat complicated and very manual; see whether following these steps isn't simpler for you:

https://docs.openshift.org/latest/getting_started/administrators.html#downloading-the-binary

Note: before running OpenShift, you always need ports 8443 and 53 to be free and the Docker service to be active.

Let me know if any of this worked for you.

Regards!
...