Bug #8409

Installer breaks

Added by Jesús García Crespo about 7 years ago. Updated over 6 years ago.

Status: Invalid
Start date: 05/07/2015
Priority: Medium
Due date:
Assignee: Mike Gale
% Done: 0%
Category: Installation
Target version: Release 2.3.0
Google Code Legacy ID:
Tested version: 2.2, 2.3
Sponsored: No
Requires documentation:

froze.png (41.6 KB) Mike Gale, 05/11/2015 11:01 AM

History

#1 Updated by Jesús García Crespo about 7 years ago

I can't reproduce anymore. I tried with qa/2.2.x and qa/2.3.x.

#2 Updated by Jesús García Crespo about 7 years ago

  • Assignee changed from Jesús García Crespo to Mike Gale

Mike, can you reproduce?

#3 Updated by Mike Gale about 7 years ago

  • File froze.png added
  • Status changed from New to Feedback
  • Assignee changed from Mike Gale to Jesús García Crespo

It seems to be working for me too, but it's still extremely slow after you hit Continue once you've entered your ES details (see attached screenshot). This was the step that was timing out for me at home on a much slower system (a VM).

#4 Updated by Mike Gale about 7 years ago

  • Status changed from Feedback to In progress
  • Assignee changed from Jesús García Crespo to Mike Gale

Actually, I'll just assign it back to myself for now and see if I can reproduce it on my VM tomorrow when I work from home.

#5 Updated by Mike Gale about 7 years ago

Still getting a timeout after the ES configuration stage when I tried to install a fresh AtoM instance:

nginx error log:

2015/05/14 21:45:09 [error] 32337#0: *8 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: _, request: "GET /index.php/sfInstallPlugin/loadData HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.atom.sock", host: "localhost", referrer: "http://localhost/index.php/sfInstallPlugin/configureSearch"

It seems the ES index does eventually get created though (from elasticsearch.log):

[2015-05-14 21:45:07,458][INFO ][cluster.metadata ] [Dakota North] [atom_test] creating index, cause [api], shards [4]/[1], mappings []
[2015-05-14 21:45:08,627][INFO ][cluster.metadata ] [Dakota North] [atom_test] create_mapping [QubitAip]
[2015-05-14 21:45:08,648][INFO ][cluster.metadata ] [Dakota North] [atom_test] create_mapping [QubitTerm]
[2015-05-14 21:45:08,712][INFO ][cluster.metadata ] [Dakota North] [atom_test] create_mapping [QubitAccession]
[2015-05-14 21:45:08,779][INFO ][cluster.metadata ] [Dakota North] [atom_test] create_mapping [QubitActor]
[2015-05-14 21:45:08,842][INFO ][cluster.metadata ] [Dakota North] [atom_test] create_mapping [QubitRepository]
[2015-05-14 21:45:08,952][INFO ][cluster.metadata ] [Dakota North] [atom_test] create_mapping [QubitInformationObject]
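
For reference, a quick way to confirm that the index and its mappings actually finished creating is to query Elasticsearch directly. This is only a sketch; it assumes ES is listening on the default localhost:9200 and uses the atom_test index name from the log above:

curl -s 'http://localhost:9200/_cat/indices?v'
curl -s 'http://localhost:9200/atom_test/_mapping?pretty'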

#6 Updated by José Raddaoui Marín almost 7 years ago

In that step the ES index is created and all the default terms are loaded into it, so it may take a while. I'm not having problems using Apache, but it only takes about 15 seconds in my VM.

I don't think we're setting any timeout or execution limit values in our config files for php-fpm or Nginx. Maybe increasing the default limits will fix this problem:

https://rtcamp.com/tutorials/php/increase-script-execution-time/
http://howtounix.info/howto/110-connection-timed-out-error-in-nginx
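
For example, a rough sketch of the kind of limits those articles describe (the values and file paths are illustrative, not tested recommendations; the socket path is the one from the nginx error above):

nginx server block:

    location ~ ^/index\.php(/|$) {
        fastcgi_pass unix:/var/run/php5-fpm.atom.sock;
        fastcgi_read_timeout 300;
    }

php-fpm pool config (e.g. /etc/php5/fpm/pool.d/atom.conf):

    request_terminate_timeout = 300

php.ini:

    max_execution_time = 300

The defaults (60s for fastcgi_read_timeout, 30s for max_execution_time) are what the loadData request would be hitting; the values only need to be high enough to cover that one step.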

Anyway, that step takes quite a while without giving any feedback to the user; I'd love to see a loading bar or something like that some day.

#7 Updated by José Raddaoui Marín almost 7 years ago

We could use the Gearman worker for this step now that it's required, but that would make it mandatory ...

#8 Updated by Mike Gale almost 7 years ago

  • Priority changed from Critical to Medium
  • Target version changed from Release 2.2.0 to Release 2.3.0

Unable to reproduce this anymore outside of my VM.

#9 Updated by Jesús García Crespo over 6 years ago

  • Status changed from In progress to Invalid
