Using MariaDB on NetPi
#11
Quote: For argument's sake, if I were to have a container running on the netPI with the local MariaDB server, and the netPI were to lose power and restart, would this container lose all the data in the MariaDB server when the netPI restarted?

I recommend reading this chapter: https://docs.docker.com/v17.09/engine/us...ontainers/

Every container, when it is started, is a copy of its image plus a write layer in the layered file system. This is where all file system changes are stored during a container's life. So even if netPI is repowered this layer still exists, and once a container is configured to "restart automatically" all your MariaDB data will be right there.

But if you were to ask me whether the data is still there if I start another container instance, then I have to say that in this case the data is lost. But Docker also has a solution for this problem, called volumes. A volume can be created in Docker and mapped to folders in a container. Suppose you now create a volume and map it to exactly the folder where MariaDB stores the database. In this case the database is not stored in the container's write layer but in the volume. If you delete the old container, start a new one and map this volume to the new container, the "old" database is still there and is used by the new container. So it will even survive new container creations (see the sketch below).
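As a rough sketch of that idea (the volume name and container name are just placeholders, the image is the one used later in this thread, and /var/lib/mysql is assumed as the MariaDB data directory):

Code:
# create a named volume once
docker volume create mariadb-data
# map it to the folder where MariaDB stores its databases
docker run -d --name mydb -v mariadb-data:/var/lib/mysql hilschernetpi/netpi-raspbian

If this container is later deleted, a new container started with the same -v mapping will find the database again.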
"You never fail until you stop trying." - Albert Einstein (1879 - 1955)

#12
I did find that the MariaDB image was not usable on the netPI/RPi; I just haven't gotten around to removing it.

Quote: My understanding is that if you are accessing the database remotely and issue a write command from there, MariaDB will first write to the database file on the file system and sync later. So you will not be able to suppress the "bad" writes to the SD card. But correct me if I am wrong.


I don't quite follow you here.
Currently the program works in this way: a Python script interprets the required information and saves it to a database (MariaDB) locally on the netPI/RPi. This database is the master; the Azure DB is a replica and syncs with the master whenever an internet connection is available.

In terms of writes to the SD card, we would record about 3-5 GB of data every month (if it were running 24/7), so I don't consider this an urgent issue. If we were to purchase a new SD card, could we load the original netPI image onto it?

Out of interest, in what applications would you avoid writing data to the SD card to get around this issue? I would have thought any data processing would write to the SD card.


I am starting to understand more of how Docker works; today I successfully developed inside the container and used commit to create an image with my changes.

Quote: I recommend reading this chapter: https://docs.docker.com/v17.09/engine/us...ontainers/
I will have a read of this tomorrow and follow up with additional questions.


I managed to get MariaDB working in a container on the RPi and netPI using the netpi-raspbian image. I can write to this database and it is syncing with the Azure cloud.

The Python script uses websockets to request information from other devices, specifically on ports 2052 and 2053. I can run this Python script on the host and it will save to the database in the Docker container.
When I run the script inside the Docker container it never receives any websocket data, so no data is stored. I have mapped the ports when starting the container using
Code:
docker run -p 2052:2052 -p 2053:2053 -p 3306:3306

Can you see any reason why it is not receiving the data?
#13
Quote: I managed to get MariaDB working in a container on the RPi and netPI using the netpi-raspbian image. I can write to this database and it is syncing with the Azure cloud.

The Python script uses websockets to request information from other devices, specifically on ports 2052 and 2053. I can run this Python script on the host and it will save to the database in the Docker container.
When I run the script inside the Docker container it never receives any websocket data, so no data is stored. I have mapped the ports when starting the container using


Let me ask you: what are you running on the netPI/RPi right now?

The mariadb-server image in one container (A) and a second container (B) running the Python script that needs access to container A. Is my understanding correct?
"You never fail until you stop trying." - Albert Einstein (1879 - 1955)

#14
Everything should be running from the same container.

Currently I am using the RPi with the latest image from raspberrypi.org.


Docker is installed and I am running a single container, hilschernetpi/netpi-raspbian.
Mapped ports are 3306:3306, 2052:2052 and 2053:2053.

Inside this container I have installed mariadb-server and the required Python packages.


The database in the container is accessible from within the LAN of the host RPi. I can run the Python script from the host and it will write to the database inside the container.

I want to be able to run everything from within the container.

When I run the Python program in the container, the websocket data is not received; the program stalls at this point waiting for the websockets.
(The websockets are being sent by devices on the LAN of the host RPi.)

I can only assume the websocket request from the container is not being routed to the correct device, or the returned websocket data is not being mapped back to the container.
#15
I have discovered it is not the websockets that are causing the issue.


The service discovery uses UDP multicast messages to determine what services are available and their corresponding IP addresses.

The multicast messages are received on the host; however, they are not received within the container.

I have found a workaround of using '--net host' when creating the container; however, from what I have read this is not a secure fix.
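For reference, a minimal sketch of that workaround (using the image mentioned earlier in this thread; other options omitted):

Code:
docker run -d --net host hilschernetpi/netpi-raspbian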
#16
Well, using the "host" mode is not what you really want to do. The mode is not as bad as you might think, but since it shares the host's TCP/IP stack with the container, it exposes all ports to the host, making port mapping unnecessary. Your containerized apps then run as if they were running directly on the host ... which calls the containerization into question.
"You never fail until you stop trying." - Albert Einstein (1879 - 1955)

#17
But Phil, here is what I can tell you: if your script runs in the same container as MariaDB, then port mapping is not necessary at all, and neither is host mode.

I suppose your Python script addresses an IP address to get in touch with the container ... and likely it is the RPi's IP address.

But if Python runs in the same container, then the way to address ports in the same container is the IP address 127.0.0.1:port, not any IP address of the "outside world".

So try this IP address locally, or use "localhost:port" as the address.
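For example, from a shell inside the container something like this should reach the server (the user and database names are just placeholders):

Code:
mysql -h 127.0.0.1 -P 3306 -u youruser -p yourdatabase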
"You never fail until you stop trying." - Albert Einstein (1879 - 1955)

#18
I am now using "host" mode and all functionality seems to be working correctly.

The volume is working as it should, and I am able to start, stop and create new containers with no loss of data.

As an example, the Raspberry Pi is running on 192.168.1.235 and needs to connect to websockets coming from 192.168.1.239, 192.168.1.240, etc.
As stated above, though, "host" mode bypasses the need for port mapping, and tests so far are working well.

I have now come to my next question with Docker. When my container is started I need to automate some commands in the console; I believe the way to do this is with the Dockerfile. How, where and when do you add these?

The following need to be automated when the container is started.

Start the MariaDB server:

Code:
/etc/init.d/mysql start



Start the python script:
Code:
python3 xxxxx.py


Is there a way to store Docker volumes with the image when using the 'commit' command?
#19
Well Phil,

you can find the sources of the Hilscher Raspbian image here: https://github.com/HilscherAutomation/netPI-raspbian.

There you will also find the Dockerfile. If you look at it closely, you will see a tag in it named ENTRYPOINT. This is where Docker always jumps to when a container is started; it points to our start script. An entrypoint is a must-have and is set up during image building, so it is fixed for our container and you can't change it ... well, you can start the image with the "docker run" command and point to another entrypoint, but this is not what you really want to do. You want a container that runs your script right from the start.

If you want your own entrypoint, you have to build your own image and hence get your own personal container once you start it.

Here is the Dockerfile reference that shows you how to set up a Dockerfile:

https://docs.docker.com/engine/reference/builder/

Since the link https://github.com/HilscherAutomation/netPI-raspbian shows you all the details about what else we put in the container, it would be easy for you to reuse the existing Dockerfile and expand it with everything you need. This is how I would do it if the rest of the image is OK for you.

But one additional remark: as you may have seen, our Raspbian container contains far too many user space programs that you will never need. So I would personally prefer that you write your own Dockerfile from scratch; from what I can see, not more than 10 lines of code are needed to realize your demand, and you would end up with the smallest possible image in a very compact format. A rough example follows below.
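Something along these lines could serve as a starting point (a minimal sketch only; the base image, paths and script name are placeholders you would replace with your own):

Code:
# minimal sketch only - base image, paths and script name are placeholders
FROM arm32v7/debian:buster

# install the MariaDB server and Python inside the image
RUN apt-get update && apt-get install -y mariadb-server python3 && rm -rf /var/lib/apt/lists/*

# copy your script and a small start script into the image
COPY xxxxx.py /usr/local/bin/xxxxx.py
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh

# the start script launches MariaDB and then the Python script
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]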

Thx
Armin
"You never fail until you stop trying." - Albert Einstein (1879 - 1955)

#20
Since you have two commands that need to be started, pointing a single ENTRYPOINT at one of them will not help you.

Look at our ENTRYPOINT tag ... it points to "/etc/init.d/entrypoint.sh". If you analyze this file, you will see that it starts all the needed scripts one after the other. This is how you have to do it as well.

Even if you commit a running container ... the entrypoint will remain. So if you do not want to write your own Dockerfile and would rather stay with the committing procedure, then edit the "/etc/init.d/entrypoint.sh" file in the running container to meet your needs (as sketched below) and commit after exiting the container.
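As a rough sketch, the lines added at the end of the entrypoint script could look like this (the script path is an assumption; adjust it to wherever your script actually lives in the container):

Code:
# start the MariaDB server
/etc/init.d/mysql start
# start the Python script in the foreground so the container keeps running
python3 /usr/local/bin/xxxxx.py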

A commit will store only what is "inside" the container. If the MariaDB data is in the container, it will be committed as well; externally mapped volumes are not.

In general, you can read about what else you can do with volumes here: https://docs.docker.com/storage/volumes/



Thx
Armin
"You never fail until you stop trying." - Albert Einstein (1879 - 1955)
