QEMU/Options - Gentoo Wiki

"From the Transistor to the Web Browser" George Hotz CS Curriculum

Found this on George Hotz's Github. Thinking of following this curriculum to get into CS. Would love to know everyone's thoughts. The only thing this curriculum lacks is links and resources.

Credit: https://github.com/geohot/fromthetransistor
"Hiring is hard, a lot of modern CS education is really bad, and it's hard to find people who understand the modern computer stack from first principles.
Now cleaned up and going to be software only. Closer to being real.

Section 1: Intro: Cheating our way past the transistor -- 0.5 weeks

Section 2: Bringup: What language is hardware coded in? -- 0.5 weeks

Section 3: Processor: What is a processor anyway? -- 3 weeks

Section 4: Compiler: A “high” level language -- 3 weeks

Section 5: Operating System: Software we take for granted -- 3 weeks

Section 6: Browser: Coming online -- 1 week

Section 7: Physical: Running on real hardware -- 1 week

submitted by Cyandemption to cscareerquestions

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual guides you step by step through setting up an OpenShift cloud environment on your own device. It tells you what needs to be done, when it needs to be done, and why you will be doing it, all in one convenient manual written for Windows users. If you want to try it on Linux or macOS, we have also added the commands necessary to get the CodeReady Containers running on those operating systems. Be warned, however, that there are some system requirements necessary to run the CodeReady Containers we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone who has an interest in the Red Hat OpenShift Container Platform and at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual on Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces, which provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because CodeReady Containers and CodeReady Workspaces help programmers and developers build their applications faster and let them test those applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment and management by streamlining and automating these processes.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory. Because most of the commands are run within the command line interface, it is necessary to know how it works and how you can browse through files and folders. If you don’t have this basic knowledge, or have trouble with the basic command line interface commands in PowerShell, a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system’s documentation or introduction guides, though the documentation can be overwhelming due to the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
macOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge of container and PaaS technologies such as Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

The Red Hat OpenShift CodeReady Containers have the following minimum hardware requirements:
Hardware requirements
CodeReady Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with virtualization support enabled in the BIOS: Hyper-V capable (Intel VT-x) or SVM mode (AMD)
Software requirements
The Red Hat OpenShift CodeReady Containers have the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual setup of the host machine.

Required additional software packages for Linux

CodeReady Containers on Linux requires the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution — Installation command
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press Login and after that select the option “Create one now”.
After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are run in this command line interface unless stated otherwise. To be able to run the commands, use the command line interface to go to the location in your $PATH where you extracted the CodeReady Containers archive.
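As an illustration of this step in PowerShell, the extraction and PATH change could look roughly like the following. The archive name crc-windows-amd64.zip and the destination folder C:\crc are assumptions; adjust them to the release you actually downloaded.
Expand-Archive -Path .\crc-windows-amd64.zip -DestinationPath C:\crc   # unpack the downloaded release archive
$Env:PATH = "C:\crc;$Env:PATH"                                         # make crc reachable for this PowerShell session
crc version                                                            # verify the binary is found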
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should show you the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, the crc start command will create a minimal OpenShift 4 cluster using the ~/.crc directory in your home folder.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, keep in mind that it is not possible to make changes to the virtual machine afterwards. For this tutorial, however, it is not necessary to change the configuration; if you don’t want to make any changes, please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a nameserver error later on; if this is the case, please start the machine with crc start -n 1.1.1.1

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those who wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers, use the crc config command. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand; the available subcommands are:
get, this subcommand allows you to see the value of a configurable property
set/unset, these subcommands set or remove the value of a configurable property
view, this subcommand shows the full configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true to skip the check or turn it into a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get <property> 
C:\Users\[username]\$PATH>crc config set <property> <value> 
C:\Users\[username]\$PATH>crc config unset <property> 
C:\Users\[username]\$PATH>crc config view 
C:\Users\[username]\$PATH>crc config --help 
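As a hedged illustration of these subcommands, the lines below read one property, show the whole configuration, and then skip a check. The lowercase property name cpus is what current crc releases use; the skip-check property name is hypothetical and only meant to show the pattern described above.
C:\Users\[username]\$PATH>crc config get cpus
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config set skip-check-hyperv-installed true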

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine.
To increase the number of vCPUs available to the virtual machine, use the $crc config set CPUs <number> command. Keep in mind that the default number of vCPUs is 4 and the number you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use the $crc config set memory <number> command. Keep in mind that the default amount of memory is 9216 MiB and the amount you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs <number> 
C:\Users\[username]\$PATH>crc config set memory <number> 
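For example, assuming you want 6 vCPUs and 12 GiB of memory (illustrative values; recent crc releases spell the CPU property in lowercase as cpus):
C:\Users\[username]\$PATH>crc config set cpus 6
C:\Users\[username]\$PATH>crc config set memory 12288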

Configuring the DNS

Window / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers; these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing the crc setup command, which automatically adjusts the DNS configuration on the system. When executing crc start, additional checks to verify the configuration will be run.

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● The CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires an api.crc.testing entry in /etc/hosts, pointing at the VM IP address, to function properly.
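As a sketch of what that forwarding file can look like (crc setup normally creates it for you; the VM IP address below is an assumed example and depends on your setup), /etc/resolver/testing follows the standard macOS resolver file format:
# /etc/resolver/testing — forward *.testing DNS lookups to the CodeReady Containers VM
nameserver 192.168.64.2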

Linux DNS setup

CodeReady Containers expects a slightly different DNS configuration on Linux, where it expects NetworkManager to manage networking. On Linux, NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward the requests for the crc.testing and apps-crc.testing domains to “192.168.130.11”. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
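To check that the forwarding works, you can query one of the cluster names and confirm it resolves to the address above (dig is provided by the dnsutils or bind-utils package on most distributions):
dig +short api.crc.testing              # should print 192.168.130.11
dig +short anything.apps-crc.testing    # any name under apps-crc.testing should resolve the same way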

Accessing the OpenShift Cluster

Accessing the OpenShift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine, you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as the developer user with the credentials provided by the output of the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console 
C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start PowerShell; a solution is to move the oc binary to the same path as the crc binary.
To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly.
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as the developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start command will provide you with the password that is needed to log in as the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc binary can now be used to interact with your OpenShift cluster. If you, for instance, want to verify whether the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default the CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to be logged in on the cluster. If you have not yet done this, it can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu in the top left.
Now that you are properly logged in, press the dropdown menu shown in the image below and from there click on Create Project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with a display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell:
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm 
imagestream.image.openshift.io/mediawiki imported 

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application with the previously imported image, go back to the console and the topology view. From here, select Container Image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creation process you should see the following; this means that the application is running successfully.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and storage to a single instance and is no longer supported by OpenShift. Horizontal scaling means increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By either pressing the up or down arrow more pods of the same application can be added. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application: the more you scale it up, the more resources it will take up.
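The same scaling can also be done from the command line. Below is a hedged sketch that assumes the deployment is named mediawiki, i.e. the name you gave the application when creating it; adjust the name to your own deployment.
C:\Users\[username]\$PATH>oc scale deployment/mediawiki --replicas=3    # scale out to 3 pods
C:\Users\[username]\$PATH>oc get pods                                   # watch the additional pods appear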

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP addresses. This makes all containers within a Pod behave as if they were on the same host. By giving each pod its own IP address, pods can be treated as physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and/or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
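For reference, a route can also be created from the command line instead of through the console steps that follow. A minimal sketch, assuming your application exposes a service named mediawiki:
C:\Users\[username]\$PATH>oc expose service mediawiki    # creates a route with a generated hostname under apps-crc.testing
C:\Users\[username]\$PATH>oc get routes                  # verify the new route and its hostname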
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1

Storage

OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to create persistent volumes without needing any knowledge of the underlying infrastructure.
Within this storage there are a few configuration options, such as the reclaim policy.
It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and therefore you cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV; this can be done by executing the following command
$oc delete pv <pv_name> 
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset or, if you wish to reuse the same storage asset, you can now create a new PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and will display the following attributes for each of them: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' 
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}' 
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' 
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check that you are logged in as Developer and click on “Monitoring”. Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2
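If you do want to experiment with monitoring, newer CodeReady Containers releases expose a configuration property for it. The sketch below is an assumption on our part rather than part of this manual's tested procedure, and the cluster will need considerably more memory than the 9 GiB minimum:
C:\Users\[username]\$PATH>crc config set enable-cluster-monitoring true
C:\Users\[username]\$PATH>crc config set memory 14336    # illustrative value; monitoring needs extra RAM
C:\Users\[username]\$PATH>crc delete                     # configuration changes only apply to a newly created virtual machine
C:\Users\[username]\$PATH>crc start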

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group’s members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we’re going to create a new user. The way this is done depends on the identity provider, and in particular on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps are as follows:
$oc create user <username> 
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity_provider>:<identity_provider_user_name> 
The <identity_provider> is the name of the identity provider in the master configuration. For example, the following commands create an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity_provider>:<identity_provider_user_name> <username> 
For example, the following command maps the ldap_provider:mediawiki_s identity to the mediawiki user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we’re going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <clusterrolebinding_name> --clusterrole=<role> --user=<username> 
The --clusterrole option is used to give the user a specific role, for example making them a cluster user with admin privileges. A cluster admin has access to all resources and is able to manage the access level of other users.
Below is an example of the admin clusterrolebinding command:
$oc create clusterrolebinding registry-controller --clusterrole=cluster-admin --user=admin 
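To verify that the user, identity and cluster role binding were created, you can list them afterwards; a quick sketch using the names from the examples above:
$oc get users 
$oc get identity 
$oc describe clusterrolebinding registry-controller 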

What did you achieve?

If you followed all the steps within this manual you should now have a functioning MediaWiki application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady Containers cannot connect to the internet due to a nameserver error. When this is encountered, a fix that worked for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V, it might be because your user is not an admin and therefore can’t access the Hyper-V Administrators user group.
  1. Click Start > Control Panel > Administrative Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.
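If you prefer doing this from an elevated PowerShell prompt instead of the Computer Management GUI, the built-in LocalAccounts cmdlets can add the user as well; a sketch, with the account name left as a placeholder:
Add-LocalGroupMember -Group "Hyper-V Administrators" -Member "<username>"   # run PowerShell as Administrator
Get-LocalGroupMember -Group "Hyper-V Administrators"                        # confirm the membership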

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift

"From the Transistor to the Web Browser" George Hotz CS Curriculum

Found this on George Hotz's Github. Thinking of following this curriculum to get into CS. Would love to know everyone's thoughts. The only thing this curriculum lacks is links and resources.

Credit: https://github.com/geohot/fromthetransistor
"Hiring is hard, a lot of modern CS education is really bad, and it's hard to find people who understand the modern computer stack from first principles.
Now cleaned up and going to be software only. Closer to being real.

Section 1: Intro: Cheating our way past the transistor -- 0.5 weeks

Section 2: Bringup: What language is hardware coded in? -- 0.5 weeks

Section 3: Processor: What is a processor anyway? -- 3 weeks

Section 4: Compiler: A “high” level language -- 3 weeks

Section 5: Operating System: Software we take for granted -- 3 weeks

Section 6: Browser: Coming online -- 1 week

Section 7: Physical: Running on real hardware -- 1 week

submitted by Cyandemption to lexfridman [link] [comments]

"From the Transistor to the Web Browser" George Hotz CS Curriculum

Found this on George Hotz's Github. Thinking of following this curriculum to get into CS. Would love to know everyone's thoughts. The only thing this curriculum lacks is links and resources.

Credit: https://github.com/geohot/fromthetransistor
"Hiring is hard, a lot of modern CS education is really bad, and it's hard to find people who understand the modern computer stack from first principles.
Now cleaned up and going to be software only. Closer to being real.

Section 1: Intro: Cheating our way past the transistor -- 0.5 weeks

Section 2: Bringup: What language is hardware coded in? -- 0.5 weeks

Section 3: Processor: What is a processor anyway? -- 3 weeks

Section 4: Compiler: A “high” level language -- 3 weeks

Section 5: Operating System: Software we take for granted -- 3 weeks

Section 6: Browser: Coming online -- 1 week

Section 7: Physical: Running on real hardware -- 1 week

submitted by Cyandemption to AskComputerScience [link] [comments]

Precompiled Gentoo Linux 17th Year Anniversary September 2020 update - Three complete GPU specific configurations

With my 17th year supporting Gentoo approaching in September, I especially wanted to post this for Gentoo users.
I have for years found the hobby of helping people on this subreddit enjoyable and wanted to offer you all an update to aid you in these trying times with your struggles adjusting to and learning to love penguins!
As some of you know, I've been a stalwart supporter of Gentoo Linux since 2003 due to the educational merits and flexibility afforded by "baking your own binaries" and configuring your own install to suit your own hardware or purpose.
This is great but the build time required to compile all that software does dissuade some people from making an attempt.
I've precompiled Gentoo Linux "stage4" tar.gz base system installs and released them on several previous occasions however i felt the changes since June warranted creating a new post update.
In December I began providing three GPU-specific configurations for Intel, AMD and Nvidia graphics cards, and given positive feedback have continued this in a similar fashion to System76's Pop!_OS,
featuring KDE Plasma with full support for 32-bit applications, then adding support for Docker, QEMU, Lutris, Steam, Wine Staging and much more!
These builds contain the base Gentoo install stage used for the initial builds in the root filesystem within the tarballs for each build, dated December 12 2019.
These builds are an update of the finest Gentoo chroot builds I've assembled and released to the general public to date :)
Gentoo Linux releases base system installs as compressed archives that include the bare minimum software necessary for Linux to reproduce itself and any other software program. Historically Gentoo allowed users to progress to this point by building up through several "stages", from stage 1 to stage 3, but later settled on only providing stage 3.
Stage4 is terminology Gentoo Linux users frequently use to refer to the filesystem contents that comprise any completed and archived installation.
A stage4 Gentoo system backup largely replaces the install stage choices offered in the Gentoo install handbook.
As many people have discovered when attempting to use Linux, software configuration can be inflexible or incompatible once it has been prepackaged for distros such as Ubuntu or Mint, and no avenue is provided to recompile that software for your own hardware configuration, to "fine tune" it, eliminate consistency conflicts, or trim an overabundance of enabled software features.
*** These builds will require some customization and additional config to become bootable if you choose to proceed with further system install configuration ***
New Gentoo Linux 17.1 September 2020 build details
Stay safe in these trying times, compile long and prosper!
submitted by xartin to Gentoo

Do people in this subreddit usually compile their own QEMU?

Hi,
I'm using QEMU on Ubuntu 20.04. I noticed the package maintainer's version is 4.3, but 5.1-rc3 is available, so I thought I'd give compiling it a shot
Here's my current config options:
Build directory /home/avery/build/qemu/build Source path /home/avery/build/qemu GIT binary git GIT submodules ui/keycodemapdb tests/fp/berkeley-testfloat-3 tests/fp/berkeley-softfloat-3 capstone slirp C compiler cc Host C compiler cc C++ compiler c++ Objective-C compiler clang ARFLAGS rv CFLAGS -g QEMU_CFLAGS -I/usinclude/pixman-1 -Werror -fprofile-arcs -ftest-coverage -g -pthread -I/usinclude/glib-2.0 -I/uslib/x86_64-linux-gnu/glib-2.0/include -pthread -I/usinclude/glib-2.0 -I/uslib/x86_64-linux-gnu/glib-2.0/include -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -std=gnu99 -Wold-style-declaration -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-psabi -fstack-protector-strong -I/usinclude/p11-kit-1 -I/usinclude/libpng16 -I/usinclude/spice-server -I/usinclude/spice-1 -I$(SRC_PATH)/capstone/include QEMU_LDFLAGS -Wl,--warn-common -fprofile-arcs -ftest-coverage -Wl,-z,relro -Wl,-z,now -pie -m64 -fstack-protector-strong make make install install python /usbin/python3 -B (3.8.2) genisoimage /usbin/genisoimage efi_aarch64 /home/avery/build/qemu/build/pc-bios/edk2-aarch64-code.fd python_yaml yes slirp support git smbd /ussbin/smbd module support no alt path mod load no host CPU x86_64 host big endian no target list x86_64-softmmu gprof enabled no sparse enabled yes strip binaries no profiler yes static build no safe stack no SDL support yes (2.0.10) SDL image support yes GTK support yes (3.24.20) GTK GL support yes VTE support yes (0.60.3) TLS priority NORMAL GNUTLS support yes libgcrypt no nettle yes (3.5.1) XTS yes libtasn1 yes PAM yes iconv support yes curses support yes virgl support yes (0.8.2) curl support yes mingw32 support no Audio drivers alsa Block whitelist (rw) Block whitelist (ro) VirtFS support yes Multipath support no VNC support yes VNC SASL support no VNC JPEG support no VNC PNG support yes xen support no brlapi support yes Documentation no PIE yes vde support yes netmap support no Linux AIO support yes Linux io_uring support no ATTXATTR support yes Install blobs yes KVM support yes HAX support yes HVF support no WHPX support no TCG support yes TCG debug enabled yes TCG interpreter no malloc trim support no RDMA support yes PVRDMA support yes fdt support no membarrier yes preadv support yes fdatasync yes madvise yes posix_madvise yes posix_memalign yes libcap-ng support yes vhost-net support yes vhost-crypto support yes vhost-scsi support yes vhost-vsock support yes vhost-user support yes vhost-user-fs support yes vhost-vdpa support yes Trace backends log spice support yes (0.14.0/0.14.2) rbd support yes xfsctl support yes smartcard support yes libusb yes usb net redir yes OpenGL support yes OpenGL dmabufs yes libiscsi support yes libnfs support yes build guest agent yes QGA VSS support no QGA w32 disk info no QGA MSI support no seccomp support yes coroutine backend ucontext coroutine pool yes debug stack usage no mutex debugging yes crypto afalg no GlusterFS support yes gcov gcov gcov enabled yes TPM support yes libssh support yes QOM debugging yes Live block migration yes lzo support yes snappy support yes bzip2 support yes lzfse support yes zstd support yes NUMA host support yes libxml2 yes tcmalloc support yes jemalloc support no avx2 
optimization yes avx512f optimization no replication support yes bochs support yes cloop support yes dmg support yes qcow v1 support yes vdi support yes vvfat support yes qed support yes parallels support yes sheepdog support yes capstone git libpmem support yes libdaxctl support yes libudev yes default devices yes plugin support yes fuzzing support no gdb /usbin/gdb rng-none yes Linux keyring yes cross containers podman NOTE: guest cross-compilers enabled: cc 
I installed as many of the dependencies as I could track down that weren't mutually exclusive with others (e.g. nettle vs libgcrypt). It's compiling right now, I imagine it's going to take a while as I am doing it on a Thinkpad T460s. Once it's done I'll upload it to github in case anyone else wants a copy.
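For reference, the overall flow from a source tarball looks roughly like the sketch below; the version number, target list and feature flags are assumptions and should be adapted to the configure output above.
# hedged sketch of a from-source QEMU build (version and options are examples)
tar xf qemu-5.1.0-rc3.tar.xz
cd qemu-5.1.0-rc3
./configure --target-list=x86_64-softmmu --enable-kvm --enable-spice --enable-virtfs
make -j"$(nproc)"
sudo make install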
Do people here usually use the maintainers version, or compile their own?
Edit: I just downloaded the tar.xz from the QEMU source code page, does it include the build dependencies, or is it the same as cloning the git repo?
submitted by AveryFreeman to VFIO

[ANN][ANDROID MINING][AIRDROP] NewEnglandcoin: Scrypt RandomSpike

New England
New England 6 States Songs: https://www.reddit.com/newengland/comments/er8wxd/new_england_6_states_songs/
NewEnglandcoin
Symbol: NENG
NewEnglandcoin is a clone of Bitcoin using scrypt as a proof-of-work algorithm with enhanced features to protect against 51% attack and decentralize on mining to allow diversified mining rigs across CPUs, GPUs, ASICs and Android phones.
Mining Algorithm: Scrypt with RandomSpike. RandomSpike is 3rd generation of Dynamic Difficulty (DynDiff) algorithm on top of scrypt.
1 minute block targets
base difficulty reset: every 1440 blocks
subsidy halves in 2.1m blocks (~ 2 to 4 years)
84,000,000,000 total maximum NENG
20000 NENG per block
Pre-mine: 1% - reserved for dev fund
ICO: None
RPCPort: 6376
Port: 6377
NewEnglandcoin has a Dogecoin-like supply of 84 billion maximum NENG. This huge supply ensures that NENG is suitable for retail transactions and daily use. The inflation schedule of NewEnglandcoin is actually identical to that of Litecoin. Bitcoin and Litecoin are already proven to be great long-term stores of value. The Litecoin-like NENG inflation schedule will make NewEnglandcoin ideal for long-term investment appreciation as the supply is limited and capped at a fixed number.
Bitcoin Fork - Suitable for Home Hobbyists
NewEnglandcoin core wallet continues to maintain version tag of "Satoshi v0.8.7.5" because NewEnglandcoin is very much an exact clone of bitcoin plus some mining feature changes with DynDiff algorithm. NewEnglandcoin is very suitable as lite version of bitcoin for educational purpose on desktop mining, full node running and bitcoin programming using bitcoin-json APIs.
The NewEnglandcoin (NENG) mining algorithm upgrade ideas were originally designed mainly for decentralization of mining rigs on scrypt, which is the same algo as Litecoin/Dogecoin. The way it is going now is that NENG is very suitable for Bitcoin/Litecoin/Dogecoin hobbyists who cannot or will not spend huge money to run noisy ASIC/GPU mining equipment, but still want to mine NENG at home with a quiet, simple CPU/GPU or with a cheap ASIC like a FutureBit Moonlander 2 USB or Apollo pod in a solo mining setup to obtain very decent, profitable results. NENG allows Bitcoin/Litecoin hobbyists to experience full node running, solo mining and CPU/GPU/ASIC mining for a fun experience at home at cheap cost, without breaking the bank on equipment or electricity.
MIT Free Course - 23 lectures about Bitcoin, Blockchain and Finance (Fall,2018)
https://www.youtube.com/playlist?list=PLUl4u3cNGP63UUkfL0onkxF6MYgVa04Fn
CPU Minable Coin
Because of the dynamic difficulty algorithm on top of scrypt, NewEnglandcoin is CPU minable. Users can easily set up a full node for mining on a home PC or Mac using our dedicated cheetah software.
Research on the first forked 50 blocks on v1.2.0 core confirmed that ASIC/GPU miners mined 66% of 50 blocks, CPU miners mined the remaining 34%.
NENG v1.4.0 release enabled CPU mining inside android phones.
Youtube Video Tutorial
How to CPU Mine NewEnglandcoin (NENG) in Windows 10 Part 1 https://www.youtube.com/watch?v=sdOoPvAjzlE 
How to CPU Mine NewEnglandcoin (NENG) in Windows 10 Part 2 https://www.youtube.com/watch?v=nHnRJvJRzZg
How to CPU Mine NewEnglandcoin (NENG) in macOS https://www.youtube.com/watch?v=Zj7NLMeNSOQ
Decentralization and Community Driven
NewEnglandcoin is a decentralized coin just like bitcoin. There is no boss on NewEnglandcoin. Nobody nor the dev owns NENG.
We know a coin is worth nothing if there is no backing from the community. Therefore, we as devs do not intend to make decisions on this coin solely by ourselves. It is our expectation that the NewEnglandcoin community will make the majority of decisions on the direction of this coin from now on. We as devs merely view ourselves as the coin creator and technical support of this coin, while providing NENG a permanent home at ShorelineCrypto Exchange.
Twitter Airdrop
Follow NENG twitter and receive 100,000 NENG on Twitter Airdrop to up to 1000 winners
Graphic Redesign Bounty
Top one award: 90.9 million NENG
Top 10 Winners: 500,000 NENG / person
Event Timing: March 25, 2019 - Present
Event Address: NewEnglandcoin DISCORD at: https://discord.gg/UPeBwgs
Please complete the above Twitter Bounty requirement first. Then follow the steps below to qualify for the Bounty:
(1) Required: submit your own designed NENG logo picture in gif, png, jpg or any other common graphic file format into the DISCORD "bounty-submission" board
(2) Optional: submit a second graphic for logo or any other marketing purposes into the "bounty-submission" board.
(3) Complete the form below.
Please limit your submission to no more than two total. Delete any wrongly submitted or undesired graphics in the board. Contact DISCORD u/honglu69#5911 or u/krypton#6139 if you have any issues.
Twitter Airdrop/Graphic Redesign bounty sign up: https://goo.gl/forms/L0vcwmVi8c76cR7m1
Milestones
Roadmap
NENG v1.4.0 Android Mining, randomSpike Evaluation https://github.com/ShorelineCrypto/NewEnglandCoin/releases/download/NENG_2020_Q3_report/NENG_2020_Q3_report.pdf
RandomSpike - NENG core v1.3.0 Hardfork Upgrade Proposal https://github.com/ShorelineCrypto/NewEnglandCoin/releases/download/2020Q1_Report/Scrypt_RandomSpike_NENGv1.3.0_Hardfork_Proposal.pdf
NENG Security, Decentralization & Valuation
https://github.com/ShorelineCrypto/NewEnglandCoin/releases/download/2019Q2_report/NENG_Security_Decentralization_Value.pdf
Whitepaper v1.0 https://github.com/ShorelineCrypto/NewEnglandCoin/releases/download/whitepaper_v1.0/NENG_WhitePaper.pdf
DISCORD https://discord.gg/UPeBwgs
Explorer
http://www.findblocks.com/explorer/NENG 
http://86.100.49.209/explorer/NENG 
http://nengexplorer.mooo.com:3001/
Step by step guide on how to setup an explorer: https://github.com/ShorelineCrypto/nengexplorer
Github https://github.com/ShorelineCrypto/NewEnglandCoin
Wallet
Android with UserLand App (arm64/armhf), Chromebook (x64/arm64/armhf): https://github.com/ShorelineCrypto/NewEnglandCoin/releases/tag/v1.4.0.5
Linux Wallet (Ubuntu/Linux Mint, Debian/MX Linux, Arch/Manjaro, Fedora, openSUSE): https://github.com/ShorelineCrypto/NewEnglandCoin/releases/tag/v1.4.0.3
MacOS Wallet (10.11 El Capitan or higher): https://github.com/ShorelineCrypto/NewEnglandCoin/releases/tag/v1.4.0.2
Android with GNUroot on 32 bits old Phones (alpha release) wallet: https://github.com/ShorelineCrypto/NewEnglandCoin/releases/tag/v1.4.0
Windows wallet: https://github.com/ShorelineCrypto/NewEnglandCoin/releases/tag/v1.3.0.1
addnode ip address for the wallet to sync faster, frequently updated conf file: https://github.com/ShorelineCrypto/cheetah_cpuminer/blob/master/newenglandcoin.conf-example
How to Sync Full Node Desktop Wallet https://www.reddit.com/NewEnglandCoin/comments/er6f0q/how_to_sync_full_node_desktop_wallet/
TWITTER https://twitter.com/newenglandcoin
REDDIT https://www.reddit.com/NewEnglandCoin/
Cheetah CPU Miner Software https://github.com/ShorelineCrypto/cheetah_cpuminer
Solo Mining with GPU or ASIC https://bitcointalk.org/index.php?topic=5027091.msg52187727#msg52187727
How to Run Two Full Node in Same Desktop PC https://bitcointalk.org/index.php?topic=5027091.msg53581449#msg53581449
ASIC/GPU Mining Pools
Warning to Big ASIC Miners: due to the DynDiff algo on top of Scrypt, solo mining is recommended for ASIC/GPU miners. Furthermore, even for mining pools, a small mining pool will generate better performance than a big NENG mining pool because of the new algo v1.2.x post hard fork.
The setup configuration of NENG for scrypt pool mining is the same as for a typical normal scrypt coin. In other words, DynDiff on the Scrypt algo is backward compatible with the Scrypt algo. Because ASIC/GPU miners rely on CPU miners for smooth blockchain movement, check out the bottom of the "Latest News" section for A Warning to All ASIC Miners before you decide to dump big ASIC hash rate into NENG mining.
(1) Original DynDiff Warning: https://bitcointalk.org/index.php?topic=5027091.msg48324708#msg48324708 
(2) New Warning on RandomSpike: the spike difficulty (244k) introduced in RandomSpike serves as a roadblock to instant mining and provides security against 51% attack risk. However, this spike difficulty roadblock makes big ASIC mining less profitable. In case a spike block is mined, the spike difficulty immediately serves as the base difficulty, which will block GPU/ASIC miners effectively and leave CPU cheetah solo miners dominating mining almost 100% until the next base difficulty reset.
FindBlocks http://findblocks.com/
CRpool http://crpool.xyz/
Cminors' Pool http://newenglandcoin.cminors-pool.com/
SPOOL https://spools.online/
Exchange
https://shorelinecrypto.com/
Features: anonymous sign up and trading. No restriction or limit on deposit or withdraw.
The trading pairs available: NewEnglandcoin (NENG) / Dogecoin (DOGE)
Trading commission: A round trip trade will incur 0.10% trading fees on average. Fees are paid only on the buyer side.
buy fee: 0.2% / sell fee: 0%
Deposit fees: free for all coins
Withdraw fees: ZERO per withdraw. Mining fees are appointed by each coin blockchain. To cover the blockchain mining fees, there is a minimum balance per coin per account:
* Dogecoin 2 DOGE
* NewEnglandcoin 1 NENG
Latest News Aug 30, 2020 - NENG v1.4.0.5 Released for Android/Chromebook Upgrade with armhf, better hardware support https://bitcointalk.org/index.php?topic=5027091.msg55098029#msg55098029
Aug 11, 2020 - NENG v1.4.0.4 Released for Android arm64 Upgrade / Chromebook Support https://bitcointalk.org/index.php?topic=5027091.msg54977437#msg54977437
Jul 30, 2020 - NENG v1.4.0.3 Released for Linux Wallet Upgrade with 8 Distros https://bitcointalk.org/index.php?topic=5027091.msg54898540#msg54898540
Jul 21, 2020 - NENG v1.4.0.2 Released for MacOS Upgrade with Catalina https://bitcointalk.org/index.php?topic=5027091.msg54839522#msg54839522
Jul 19, 2020 - NENG v1.4.0.1 Released for MacOS Wallet Upgrade https://bitcointalk.org/index.php?topic=5027091.msg54830333#msg54830333
Jul 15, 2020 - NENG v1.4.0 Released for Android Mining, Ubuntu 20.04 support https://bitcointalk.org/index.php?topic=5027091.msg54803639#msg54803639
Jul 11, 2020 - NENG v1.4.0 Android Mining, randomSpike Evaluation https://bitcointalk.org/index.php?topic=5027091.msg54777222#msg54777222
Jun 27, 2020 - Pre-Announce: NENG v1.4.0 Proposal for Mobile Miner Upgrade, Android Mining Start in July 2020 https://bitcointalk.org/index.php?topic=5027091.msg54694233#msg54694233
Jun 19, 2020 - Best Practice for Futurebit Moonlander2 USB ASIC on solo mining mode https://bitcointalk.org/index.php?topic=5027091.msg54645726#msg54645726
Mar 15, 2020 - Scrypt RandomSpike - NENG v1.3.0.1 Released for better wallet syncing https://bitcointalk.org/index.php?topic=5027091.msg54030923#msg54030923
Feb 23, 2020 - Scrypt RandomSpike - NENG Core v1.3.0 Relased, Hardfork on Mar 1 https://bitcointalk.org/index.php?topic=5027091.msg53900926#msg53900926
Feb 1, 2020 - Scrypt RandomSpike Proposal Published- NENG 1.3.0 Hardfork https://bitcointalk.org/index.php?topic=5027091.msg53735458#msg53735458
Jan 15, 2020 - NewEnglandcoin Dev Team Expanded with New Kickoff https://bitcointalk.org/index.php?topic=5027091.msg53617358#msg53617358
Jan 12, 2020 - Explanation of Base Diff Reset and Effect of Supply https://www.reddit.com/NewEnglandCoin/comments/envmo1/explanation_of_base_diff_reset_and_effect_of/
Dec 19, 2019 - Shoreline_tradingbot version 1.0 is released https://bitcointalk.org/index.php?topic=5121953.msg53391184#msg53391184
Sept 1, 2019 - NewEnglandcoin (NENG) is Selected as Shoreline Tradingbot First Supported Coin https://bitcointalk.org/index.php?topic=5027091.msg52331201#msg52331201
Aug 15, 2019 - Mining Update on Effect of Base Difficulty Reset, GPU vs ASIC https://bitcointalk.org/index.php?topic=5027091.msg52169572#msg52169572
Jul 7, 2019 - CPU Mining on macOS Mojave is supported under latest Cheetah_Cpuminer Release https://bitcointalk.org/index.php?topic=5027091.msg51745839#msg51745839
Jun 1, 2019 - NENG Fiat project is stopped by Square, Inc https://bitcointalk.org/index.php?topic=5027091.msg51312291#msg51312291
Apr 21, 2019 - NENG Fiat Project is Launched by ShorelineCrypto https://bitcointalk.org/index.php?topic=5027091.msg50714764#msg50714764
Apr 7, 2019 - Announcement of Fiat Project for all U.S. Residents & Mobile Miner Project Initiation https://bitcointalk.org/index.php?topic=5027091.msg50506585#msg50506585
Apr 1, 2019 - Disclosure on Large Buying on NENG at ShorelineCrypto Exchange https://bitcointalk.org/index.php?topic=5027091.msg50417196#msg50417196
Mar 27, 2019 - Disclosure on Large Buying on NENG at ShorelineCrypto Exchange https://bitcointalk.org/index.php?topic=5027091.msg50332097#msg50332097
Mar 17, 2019 - Disclosure on Large Buying on NENG at ShorelineCrypto Exchange https://bitcointalk.org/index.php?topic=5027091.msg50208194#msg50208194
Feb 26, 2019 - Community Project - NewEnglandcoin Graphic Redesign Bounty Initiated https://bitcointalk.org/index.php?topic=5027091.msg49931305#msg49931305
Feb 22, 2019 - Dev Policy on Checkpoints on NewEnglandcoin https://bitcointalk.org/index.php?topic=5027091.msg49875242#msg49875242
Feb 20, 2019 - NewEnglandCoin v1.2.1 Released to Secure the Hard Fork https://bitcointalk.org/index.php?topic=5027091.msg49831059#msg49831059
Feb 11, 2019 - NewEnglandCoin v1.2.0 Released, Anti-51% Attack, Anti-instant Mining after Hard Fork https://bitcointalk.org/index.php?topic=5027091.msg49685389#msg49685389
Jan 13, 2019 - Cheetah_CpuMiner added support for CPU Mining on Mac https://bitcointalk.org/index.php?topic=5027091.msg49218760#msg49218760
Jan 12, 2019 - NENG Core v1.1.2 Released to support MacOS OSX Wallet https://bitcointalk.org/index.php?topic=5027091.msg49202088#msg49202088
Jan 2, 2019 - Cheetah_Cpuminer v1.1.0 is released for both Linux and Windows https://bitcointalk.org/index.php?topic=5027091.msg49004345#msg49004345
Dec 31, 2018 - Technical Whitepaper is Released https://bitcointalk.org/index.php?topic=5027091.msg48990334#msg48990334
Dec 28, 2018 - Cheetah_Cpuminer v1.0.0 is released for Linux https://bitcointalk.org/index.php?topic=5027091.msg48935135#msg48935135
Update on Dec 14, 2018 - NENG Blockchain Stuck Issue https://bitcointalk.org/index.php?topic=5027091.msg48668375#msg48668375
Nov 27, 2018 - Exclusive for PC CPU Miners - How to Steal a Block from ASIC Miners https://bitcointalk.org/index.php?topic=5027091.msg48258465#msg48258465
Nov 28, 2018 - How to CPU Mine a NENG block with window/linux PC https://bitcointalk.org/index.php?topic=5027091.msg48298311#msg48298311
Nov 29, 2018 - A Warning to ASIC Miners https://bitcointalk.org/index.php?topic=5027091.msg48324708#msg48324708
Disclosure: the dev team came from ShorelineCrypto, a US-based informatics service business offering fee-for-service coin creation, coin exchange listing, blockchain consulting, etc.
submitted by honglu69 to NewEnglandCoin [link] [comments]

Precompiled Gentoo Linux 17.1 September 2020 update - Three complete GPU specific configurations

I have for years found the hobby of helping people on this subreddit enjoyable and wanted to offer you all an update to aid you in these trying times with your struggles adjusting to and learning to love penguins!
As some of you know, I've been a stalwart supporter of Gentoo Linux since 2003, due to the educational merits and flexibility afforded by "baking your own binaries" and configuring your own install to suit your own hardware or purpose.
This is great but the build time required to compile all that software does dissuade some people from making an attempt.
I've precompiled Gentoo Linux "stage4" tar.gz base system installs and released them on several previous occasions; however, I felt the changes since June warranted creating a new post update.
In December I began providing three GPU-specific configurations for Intel, AMD and Nvidia graphics cards, and given the positive feedback I have continued this in a similar fashion to System76's Pop!_OS,
featuring KDE Plasma with full support for 32-bit applications, plus support for docker, qemu, lutris, Steam, Wine Staging and much more!
Each build's tarball also contains, in its root filesystem, the base Gentoo install stage (dated December 12 2019) that was used for the initial builds.
These builds are an update of the finest Gentoo chroot builds I've assembled and released to the general public to date :)
Gentoo Linux releases system install base systems as compressed archives that include the bare minimum software necessary for Linux to reproduce itself and any other software program. Historically Gentoo used to allow users to progress to this point by building up to several "stages" from stage 1 to stage 3 then later settled on only providing stage3.
Stage4 is terminology Gentoo Linux users frequently use to refer to only the filesystem contents that comprises any completed and archived installation.
A stage4 Gentoo system backup largely replaces the install stage choices offered in the Gentoo install handbook.
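For readers unfamiliar with the workflow, this is roughly how a stage4 archive of this kind is created from a finished install and later restored onto a new target; the paths and exclude list below are illustrative sketches, not the exact commands used for these builds:

# create a stage4 from a finished install, leaving out pseudo-filesystems
tar -czf /mnt/backup/stage4.tar.gz --xattrs --numeric-owner --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run -C / .
# restore it onto a freshly partitioned and mounted target
tar -xpzf stage4.tar.gz --xattrs --numeric-owner -C /mnt/gentoo

The --numeric-owner and --xattrs flags matter because the archive is usually unpacked on a system that does not yet have the same user database or file capabilities.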
As many people who attempt to use Linux have discovered, software can be inflexible or incompatible once it has been prepackaged for a distro such as Ubuntu or Mint (pick one), and no avenue is provided to recompile that software to adapt it to your own hardware configuration, to "fine tune" it, to eliminate consistency conflicts, or to trim an overabundance of enabled software features.
*** These builds will require some customization and additional config to become bootable if you choose to proceed with further system install configuration ***
New Gentoo Linux 17.1 September 2020 build details
September 2020 release updates are available from my webserver
Stay safe in these trying times, compile long and prosper!
submitted by xartin to linux4noobs [link] [comments]

Precompiled Gentoo Linux 17.1 June 2020 update - Three complete GPU specific configurations

I have for years found the hobby of helping people on this subreddit enjoyable and wanted to offer you all an update to aid you in these trying times with your struggles adjusting to and learning to love penguins!
As some of you know, I've been a stalwart supporter of Gentoo Linux since 2003, due to the educational merits and flexibility afforded by "baking your own binaries" and configuring your own install to suit your own hardware or purpose.
This is great but the build time required to compile all that software does dissuade some people from making an attempt.
I've precompiled Gentoo Linux "stage4" tar.gz base system installs and released them on this sub in December 2019 and on several previous occasions; however, I felt the changes since December warranted creating a new semi-annual post update.
In December I began providing three GPU-specific configurations for Intel, AMD and Nvidia graphics cards, and given the positive feedback I have continued this in a similar fashion to System76's Pop!_OS,
featuring KDE Plasma with full support for 32-bit applications, plus support for docker, qemu, lutris, Steam, Wine Staging 5.9 and much more!
Each build's tarball also contains, in its root filesystem, the base Gentoo install stage (dated December 12 2019) that was used for the initial builds.
These builds are an update of the finest Gentoo chroot builds I've assembled and released to the general public to date :)
Gentoo Linux releases system install base systems as compressed archives that include the bare minimum software necessary for Linux to reproduce itself and any other software program. Historically Gentoo used to allow users to progress to this point by building up to several "stages" from stage 1 to stage 3 then later settled on only providing stage3.
Stage4 is terminology Gentoo Linux users frequently use to refer to only the filesystem contents that comprises any completed and archived installation.
A stage4 Gentoo system backup largely replaces the install stage choices offered in the Gentoo install handbook.
As many people who attempt to use Linux have discovered, software can be inflexible or incompatible once it has been prepackaged for a distro such as Ubuntu or Mint (pick one), and no avenue is provided to recompile that software to adapt it to your own hardware configuration, to "fine tune" it, to eliminate consistency conflicts, or to trim an overabundance of enabled software features.
*** These builds will require some customization and additional config to become bootable if you choose to proceed with further system install configuration ***
New Gentoo Linux 17.1 June 2020 build details
June 2020 release updates are available from my webserver
Stay safe in these trying times, compile long and prosper!
submitted by xartin to linux4noobs [link] [comments]

Catalina with Broadwell GVT-g on Linux [Take 2]

Hello again, Reddit!
We're back!
Life took over and high school didn't get any easier. My apologies for the 9 month delay in this promised continued attempt from the previous post: https://www.reddit.com/hackintosh/comments/c0nrc8/catalina_with_broadwell_gvtg_in_linux/
This is going to be a long post, as this project has had several incarnations and lots of people wondering about it. I will be reaching out to as many of you as possible now that the coronavirus has given me several weeks out of physical school.
Table of Contents
  1. Current Hardware/Software
  2. Modification attempts so far
  3. Details on current issues/failures
  4. Addressing 9 months worth of community backlog
  5. Plan for getting this to work
I. Current Hardware/Software configs
TL;DR: 1) Linux 5.6-rc7 WITH patch, 2) qemu 4.2.0, 3) Ubuntu 20.04 dev branch
I am still using OSX-KVM's basic setup, including their prebuilt clover and some inspiration from their ng boot script.
Time went on and I'm still with the same MacBookAir7,2 but now on Ubuntu 20.04 (focal) dev branch. I also have a clean 10.15.3 install (working and booting) along with a custom compiled 5.6-rc7 kernel WITH the following patch for edid on BDW host:
https://lists.freedesktop.org/archives/intel-gvt-dev/2019-December/006185.html
I have a custom compiled qemu-4.2.0 for the latest possible code. I'm sure it's been updated since I compiled it about 2 months ago and am working on updating it.
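For anyone wanting to reproduce the userspace side, a local build of qemu 4.2.0 from the release tarball generally looks like the sketch below; the configure flags are my assumption, since the post doesn't list the exact ones used:

wget https://download.qemu.org/qemu-4.2.0.tar.xz
tar xf qemu-4.2.0.tar.xz && cd qemu-4.2.0
./configure --target-list=x86_64-softmmu --enable-kvm
make -j"$(nproc)"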
My boot config to facilitate debugging:
boot-args= -v amfi_get_out_of_my_way=0x1 serial=1 intcoproc_unrestricted=1 amfi_allow_any_signature=1 amfi_unrestrict_task_for_pid=1 PE_i_can_has_debugger=1
csr_active_config=0x80 (new value that unrestricts everything)
edid: I used https://edid.tv/edid/98/. Just download the binary and xxd -p it into the Clover Configurator CustomEDID blank. You can use any edid like this. You can also just use my config.plist from the drive folder; it has this already set.
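As a concrete sketch of that step, assuming the chosen EDID blob has already been saved locally as edid.bin (a hypothetical filename):

xxd -p edid.bin | tr -d '\n'
# paste the printed hex string into the CustomEDID field in Clover Configurator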
If you'd like the full configs I'm using, please see the following google drive folder:
https://drive.google.com/drive/folders/1C4g2QxRB59biBb9qtx7hpVPZgQHttXOk?usp=sharing
If you're going to use the scripts I made, you'll need to edit the following (a rough sketch of what these scripts do is included after this list):
make_vfio.sh: the chown line; replace with your user
qemu-install2.sh: drives, vfio path (if not using mine), net config (if not using mine)
net_kholia.sh: the tunctl command, replace my username with yours
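The two helper scripts themselves aren't reproduced in this post, so the following is only a sketch of what they presumably do, under the assumption that make_vfio.sh creates the GVT-g mediated device and fixes its ownership, and net_kholia.sh sets up a tap interface; the PCI address, vGPU type, tap name and user are placeholders (the UUID is the one that appears in the qemu error later in the post):

# make_vfio.sh (sketch): create the vGPU and hand the vfio group to your user
GVT_PCI=0000:00:02.0
GVT_TYPE=i915-GVTg_V5_4
GVT_UUID=a297db4a-f4c2-11e6-90f6-d3b88d6c9525
echo "$GVT_UUID" > /sys/bus/pci/devices/$GVT_PCI/mdev_supported_types/$GVT_TYPE/create
chown youruser /dev/vfio/*
# net_kholia.sh (sketch): tap device for the guest NIC
tunctl -t tap0 -u youruser
ip link set tap0 up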
II. Modification attempts so far
Clover: Right now, I have an ig-platform-id=0x16260006 to match my real macbook air. I also have set InjectIntel=true which seems to fix the new error: "[IGPU] Graphics driver failed to load: could not register with Framebuffer driver!".
Linux GVTg KERNEL: The edid BDW enablement patch is ONE of the two options for enabling QE/CI on the macOS accelerator kext. The other is a VM-side patch, possibly a binpatch or a clover EDID injection. I tried both; neither currently works.
Linux GVTg USERSPACE: No patches. I have a custom compiled, but vanilla, qemu 4.2.0.
macOS: no binpatches. It seems the kernel panic trigger that had to be binpatched in the past no longer exists, or perhaps the code has been rewritten internally. Reverse-engineering BDWGraphics to find out what is and isn't happening is definitely something to look to in the near future. It is possible that this was fixed by the kvm.ignore_msrs=1 argument; this Linux-side option also allows non-Penryn CPUs to be used (I am using -cpu host in my qemu script).
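For reference, ignore_msrs is a parameter of the host's kvm module rather than a macOS boot-arg; it can be set either on the host kernel command line or persistently via modprobe:

# on the host kernel command line:
kvm.ignore_msrs=1
# or persistently:
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf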

III. Details on current issues/failures
  1. Current status: macOS booting with BDW kexts loaded but no display detected and possible BDW kext self-disabling.

https://preview.redd.it/3znp89wdano41.png?width=1280&format=png&auto=webp&s=4f008f37a44707dc1da28a628066be7698cb5093
qemu log shows: qemu-system-x86_64: vfio_pci_write_config(a297db4a-f4c2-11e6-90f6-d3b88d6c9525, 0x4, 0x900417, 0x4) failed: Bad address
This, along with the fact that the earlier kernel panic no longer occurs, AND the lack of BDW messages printed to kernel log, leads me to believe that somewhere in the BDW binary there is some logic failure. I may be wrong though:
Something seems to have changed, or it may just be me now with the MSR's being ignored having fixed the original panic that still could occur. Either way, there's no way to be sure if clover CustomEDID is working or not. It didn't work last time when the BDW kexts definitively did load and we saw printf's of it doing loading routines. There's a lot of uncertainty as I only just got this up and running today.
2. Kernel EDID patch: This came out around December, and I'm very naive for not realizing I could've made this patch myself. It simply removes the Skylake/Kabylake platform detection logic and makes the EDID function work on all platforms. Regardless, with the patch, a kernel oops occurs in the function intel_vgpu_reg_rw_edid in drivers/drm/i915/kvmgt.c. It is a null pointer dereference; I am working on getting the kprintf from it. This is a current area of attention. It may be because I'm using xres=1280 yres=800 on a GVT type with a maximum resolution of 1024x768, so I'll work on using the 1920x1200 one instead and see if it still crashes.
The commit log for the patch from the intel guy said that all platforms should support the edid region. If anyone could test EDID on an "officially" supported platform, either Skylake or Kabylake, and see if you get the same oops with 5.6-rc7, please do so. If it just oopses on all platforms due to a regression, I may be able to compile a different kernel that doesn't cause a dereference. If Broadwell really doesn't support the EDID region when forced to, then this may be a blocking issue for the whole project (I don't possess any later hardware). WORKING ON THIS RIGHT NOW
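One way to check which vGPU types your host's GVT actually offers, and what maximum resolution each supports, before picking xres/yres (the sysfs paths assume the usual 00:02.0 IGD address, and the type name shown is just an example):

ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/description
# the description file lists the aperture sizes and the maximum resolution for that vGPU type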
IV. Addressing 9 months worth of community backlog
I don't want to be the kid whining about high school. I generally do very well, but it definitely takes some effort being at an infamously academically difficult private school in the Orlando area. Now that we're "off" for several weeks, I'm prepared to dedicate a lot of time to furthering this.
amorooc ct_the_man_doll I saw your thread here: https://www.reddit.com/VFIO/comments/a2bnv3/state_of_gvtg_macos_support/
Please let me know all your questions! I will be active on reddit through the next several weeks. Have y'all been doing GVT-g since then?
TheRacerMaster I'd love to hear your thoughts. Have you been in the GVT-g scene since the High Sierra attempt? Contact me if you'd like to work on this privately; otherwise this post should be good to document progress for everyone.
spicypixel I saw your comment on the original Catalina attempt, as of now it is no longer abandoned!
davidgarazaz lilolalu please take a look here!
TrashConvo it's working, but there is no display yet. I have screen sharing on and am using that while forcing use of the BDW device (-vga none).
/u/WesolyKubeczek you have the most promising story. I may be able to get there if I can get BDW EDID working (currently unsupported only because of a simple logic check in kvmgt.c). Please tell us if you ever got anywhere further?
8700t I'm curious: what binpatches with lilu? How did your demo work?
sobe3249 yes, I have the same vfio invalid issue. Currently investigating. Help would be appreciated!


If there's anyone I've missed, I didn't forget about you. This project has definitely grown further than I ever expected it to, beyond a weekend attempt. I'm crossposting this to several subreddits to make sure everyone who I wasn't able to get to in 9 months has a chance to participate in some real progress once more.
Thank you all! Looking forward to hearing from all of you.
V. Plan for getting this to work.
  1. Kernel EDID oops: working on this. If I can get this to work, then we may be a step away from QE/CI as the drivers seem to load?
  2. BDWGraphics: there are no longer any printfs, and there is a weird invalid PCI region. Any thoughts on this? There is no kernel panic anymore, likely due to the MSRs being ignored via the boot-arg, but there are no [IGPU] init printfs anymore either. That worries me, though it could just be a code rewrite by Apple/Intel.
  3. Qemu: currently working on the crash connected to the edid patch.
Theoretically, all we need to get working is an EDID injection. It could be in Clover, another bootloader, or in the Linux kernel vfio code itself. Perhaps that new hip bootloader that everyone's suddenly using would be worth trying if it has EDID patching functionality? I have no idea what it is besides that it's called OpenCore or something like that.
submitted by newhacker1746 to hackintosh [link] [comments]

Precompiled Gentoo Linux 17.1 Extra Special December update - Three complete GPU specific configurations

I have for years found the hobby of helping people on this subreddit enjoyable and wanted to offer you all something extraordinarily special for December 2019 to aid you with your struggles adjusting to and learning to love penguins!
As some of you know, I've been a stalwart supporter of Gentoo Linux since 2003, due to the educational merits and flexibility afforded by "baking your own binaries" and configuring your own install to suit your own hardware or purpose.
This is great but the build time required to compile all that software does dissuade some people from making an attempt.
I precompiled a Gentoo Linux "stage4" tar.gz base system install and released it on this sub last month; however, I wanted to offer something really special for December.
I began building a new amdgpu gentoo install last week for a spare pc build but this also presented an opportunity to consider providing something new for you guys and gals!
I considered releasing an easy update to last month's build, but why give away one half-completed build when I can give away three complete GPU-specific configured system builds with full support for 32-bit applications, plus support for docker, qemu, lutris, Steam, Wine Staging 4.21 and more!
As an added bonus for you folks with newer AMD RX 5700 XT graphics cards, the amdgpu build includes Mesa 19.2.7, and the Wine Staging build for amdgpu includes Vulkan and Direct3D 12 support.
Built from a fresh Gentoo stage 3 systemd tarball dated December 3rd 2019.
These are the finest builds I've assembled and released to the general public to date :)
Gentoo Linux releases system install base systems as compressed archives that include the bare minimum software necessary for Linux to reproduce itself and any other software program. Historically Gentoo used to allow users to progress to this point by building up to several "stages" from stage 1 to stage 3 then later settled on only providing stage3.
Stage4 is terminology Gentoo Linux users frequently use to refer to only the filesystem contents that comprises any completed and archived installation.
A stage4 Gentoo system backup largely replaces the install stage choices offered in the Gentoo install handbook.
As many people who attempt to use Linux have discovered, software can be inflexible or incompatible once it has been prepackaged for a distro such as Ubuntu or Mint (pick one), and no avenue is provided to recompile that software to adapt it to your own hardware configuration, to "fine tune" it, to eliminate consistency conflicts, or to trim an overabundance of enabled software features.
*** These builds will require some customization and additional config to become bootable if you choose to proceed with further system install configuration ***
New Gentoo Linux 17.1 December 2019 build details
The current precompiled stage4 tarballs can be downloaded from my Google drive
Merry Christmas and a Happy New Year!
submitted by xartin to linux4noobs [link] [comments]

Cant passthrough RX 5700 XT

I was trying to pass through my only GPU, but there seems to be a problem with VFIO.
CPU: Ryzen 1700X
GPU: Sapphire pulse rx 5700 xt
Mobo: Asus Rog strix X370-F
Bios options: SVM : Enabled, SR-IOV : Disabled
OS: arch , kernel 5.2.11-arch1-1-ARCH
Kernel parameters: "amd_iommu=on iommu=pt loglevel=3 quiet"

mkinitcpio.conf (comments are ommited)
MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd)
BINARIES=()
FILES=()
HOOKS=(base udev autodetect modconf block filesystems keyboard fsck)
/etc/modprobe.d/vfio.conf
options vfio_pci ids=1002:731f,1002:ab38 
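One step the post doesn't show: on Arch the vfio-pci driver and its device ids are baked into the initramfs via the MODULES line above, so the image has to be rebuilt after editing these two files, e.g.:

mkinitcpio -P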
iommu groups
IOMMU Group 0:
    00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 1:
    00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 10:
    00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 11:
    00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)
    00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 12:
    00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
    00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
    00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
    00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
    00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
    00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
    00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1466]
    00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 13:
    01:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
IOMMU Group 14:
    02:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset USB 3.1 xHCI Controller [1022:43b9] (rev 02)
    02:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset SATA Controller [1022:43b5] (rev 02)
    02:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset PCIe Upstream Port [1022:43b0] (rev 02)
    03:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    03:02.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    03:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    03:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    03:06.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    03:07.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    04:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller [1b21:1242]
    05:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 15:
    0a:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:1478] (rev c1)
IOMMU Group 16:
    0b:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:1479]
IOMMU Group 17:
    0c:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5700 / 5700 XT] [1002:731f] (rev c1)
IOMMU Group 18:
    0c:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 HDMI Audio [1002:ab38]
IOMMU Group 19:
    0d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:145a]
IOMMU Group 2:
    00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 20:
    0d:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
IOMMU Group 21:
    0d:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
IOMMU Group 22:
    0e:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function [1022:1455]
IOMMU Group 23:
    0e:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 24:
    0e:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller [1022:1457]
IOMMU Group 3:
    00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 4:
    00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 5:
    00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 6:
    00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 7:
    00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 8:
    00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 9:
    00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
xml
(libvirt domain XML; the tags were stripped by the formatting here and only the values survived. Recoverable settings: name win10, uuid facb7abd-ac3e-4a04-8e86-d6944b62d723, memory and currentMemory 8388608 KiB, 16 vCPUs, hvm with UEFI loader /usr/share/ovmf/x64/OVMF_CODE.fd and nvram /var/lib/libvirt/qemu/nvram/win10_VARS.fd, on_poweroff destroy / on_reboot restart / on_crash destroy, emulator /usr/bin/qemu-system-x86_64.)
Output of "dmesg | grep vfio" before starting vm
[ 1.279191] vfio-pci 0000:0c:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 1.293682] vfio_pci: add [1002:731f[ffffffff:ffffffff]] class 0x000000/00000000
[ 1.310411] vfio_pci: add [1002:ab38[ffffffff:ffffffff]] class 0x000000/00000000
full output: https://pastebin.com/UVaxAWWU

Output of "dmesg | grep vfio" after starting vm
[ 1.279191] vfio-pci 0000:0c:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 1.293682] vfio_pci: add [1002:731f[ffffffff:ffffffff]] class 0x000000/00000000
[ 1.310411] vfio_pci: add [1002:ab38[ffffffff:ffffffff]] class 0x000000/00000000
[ 358.866916] vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap [email protected]
[ 358.866927] vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap [email protected]
[ 358.866930] vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap [email protected]
[ 358.866931] vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap [email protected]
[ 358.866933] vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap [email protected]
[ 358.867808] vfio-pci 0000:0c:00.0: BAR 0: can't reserve [mem 0xe0000000-0xefffffff 64bit pref]
[ 361.571367] vfio-pci 0000:0c:00.0: No more image in the PCI ROM
[ 361.571388] vfio-pci 0000:0c:00.0: No more image in the PCI ROM
full output: https://pastebin.com/VyxF9Y88

Since I only have one GPU, I am forwarding X11 to a laptop and running virt-manager from there. Windows starts and works fine, but on my main display I only get a blinking cursor after starting the VM.

Edit: Thanks to u/cybervseas and this post https://www.reddit.com/VFIO/comments/7kpw33/cant_passthrough_boot_gpu_did_i_miss_something/ I got the VM working.
I added a line to the XML file that points to the ROM file to be used. You can get that file from https://www.techpowerup.com/vgabios/ , dump it yourself with GPU-Z, or use any other way to get the ROM of your GPU.
(The added line was stripped by the formatting here; it is a rom element inside the GPU's hostdev block, e.g. <rom file='/path/to/your-gpu-vbios.rom'/> with the path pointing at the dumped vBIOS.)
The last thing was changing some settings in GRUB. I added the kernel parameter 'nofb' and changed GRUB_GFXPAYLOAD from 'keep' to 'text' in /etc/default/grub.
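A sketch of what the resulting /etc/default/grub entries might look like (note that the variable is normally spelled GRUB_GFXPAYLOAD_LINUX there), followed by regenerating the config:

GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt loglevel=3 quiet nofb"
GRUB_GFXPAYLOAD_LINUX=text
# then:
grub-mkconfig -o /boot/grub/grub.cfg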
Everything seems to work now. I am writing this from inside the VM, and I had no problems with the driver installation; it works the same way as if Windows were the host OS.
submitted by diogo464 to VFIO [link] [comments]

How to: Run a stratum with a different processor architecture (Like ARM running on x64)!

I did some testing, and with some help from ParadigmComplex I was able to get different architectures working! This lets you use a distro of any architecture, with QEMU and binfmt automatically running the right qemu user-mode binary for us. Here's what I did:
Step 1. Install qemu-user-static and binfmt-support. Those packages are available in the AUR, and I believe they are in the Debian & Ubuntu repos as well; I can't speak for other distributions (just download a compatible stratum if needed!). I personally had to install qemu-user-static-bin from the AUR because compiling failed.
Step 2. Enable binfmt. For me it was just sudo update-binfmts --enable
Step 3. As the root user, download and extract a prebuilt rootfs of a distro (for example, Arch Linux ARM for armv8 is what I used) to /bedrock/strata/. e.g. mkdir /bedrock/strata/arch-arm; tar -xf rootfs.tar.gz -C /bedrock/strata/arch-arm
You can also create various rootfs's in a directory using tools like image-bootstrap.
Step 4. Grab the relevant qemu binary (in my case, qemu-aarch64-static) and copy it to your new stratum's /usr/bin. e.g. cp /bedrock/strata/arch/usr/bin/qemu-aarch64-static /bedrock/strata/arch-arm/usr/bin
Step 5. Just about done! Run brl show arch-arm and then brl enable arch-arm (substitute your stratum's name).
Step 6 (Optional). Check that it's working: if strat -r arch-arm uname -a shows the target architecture, then you're good! (A couple of extra checks are sketched below.)
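If something doesn't run, two sanity checks I'd suggest (assuming the aarch64 example above) are confirming that binfmt_misc actually registered the interpreter and seeing which binary it dispatches to:

cat /proc/sys/fs/binfmt_misc/qemu-aarch64
update-binfmts --display qemu-aarch64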
For me, I had to disable signature checks and free space checks in Arch Linux ARM, because I got a few errors, but after that, pacman worked totally fine. Apparently using bedrock instead of a DIY chroot fixed these issues!
Never thought that Bedrock would be able to handle things like this, but it seems to not care at all about the slightly wonky setup.
Screenshot of my setup: https://imgur.com/1xhdTaw
submitted by cd109876 to bedrocklinux [link] [comments]

Gentoo Linux+QEMU KVM+AMD RX 560 GPU Passthrough+HighSierra/Win10 (Both Successful and Near-Native)

READ THE UPDATE BELOW. I FINALLY got all three (yes 3) of Gentoo, High Sierra and Windows 10 all running at the same time on the same machine, with the High Sierra on RX 560, and the Windows 10 on Nvidia GTX 1080, and Gentoo host on intel HDA 630 all at the same time. HAHAHA!
"Hardware and Software Hybridization of Guest Operating Systems"
by rev0lt
Experiment's Goals:
(1) High Sierra at near-native speed on Linux QEMU KVM with AMD GPU Passthrough (Success);
(2) Win10 at near-native speed on Linux QEMU KVM with AMD GPU Passthrough (Success);
(3) To achieve (1) and (2) but using Nvidia GPU (Successful on Windows10, everything works perfect; able to boot on High Sierra boot screen, but it ends with a stop sign); and
(4) To achieve (1) and (2) simultaneously. (Success!). I got Linux+HighSierra+Windows10 all running at the same time on the same machine.
Rationale ("Why?"):
Can you feel it?
Jokes aside, a bit of a brief background -- this whole trouble started with my Apple Magic Mouse (Series 1). I really love this mouse -- it has been with me almost 8 full years now, and yes, it still looks beautiful; and I really wanted this wonderful mouse to work in an acceptable manner in Gentoo Linux. I managed to get it to work, but somehow the scrolling and movement in X Window just does not "feel right," even if I tried tuning it with xinput.
So, being OCD'd, I tried to get the mouse to work in an acceptable manner in Windows 10 too. It does work somewhat okay-ish, using Apple's Bootcamp driver for the mouse which I got using my MacBook Pro. But the scrolling and "feel" are still somewhat "off".
Which brings us to this point. From my OCD perspective, this is all done just to get the mouse to work "right" on my setup below.
Hardware Setup:
Apple Magic Mouse (Series 1) <3 <3 <3
Asus Maximus Code IX
Intel i7-7700K
EVGA Nvidia GTX 1080 Hybrid
ASUS Strix AMD RX 560 (purchased for testing this setup)
G.Skill TridentZ DDR4-3200 16G
Samsung NVMe SSD 960 EVO M.2 250GB
Samsung SSD 850 PRO 256GB
EVGA Supernova 850w G2 Gold
Dell P4317Q 4K Monitor (43-Inch)
CoolerMaster MasterKeys Pro L (Cherry MX Red)
Sony Playstation 4 PRO
Thermaltake Core X71
Thermaltake Water 3.0
Apple MacBook Pro
Software Setup:
The SSD 850 Pro is the drive of interest here, since it is where I store the Linux host for learning computer science and programming as a hobby. (The NVMe M.2 drive is installed with Windows 10 as my primary OS for daily use, so it is irrelevant here.)
I compiled Gentoo Linux with kernel 4.13.8 on the SSD 850 Pro as the host OS, with KVM, IOMMU and VFIO support enabled in the kernel. I also compiled QEMU 2.10.0.
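The post doesn't list the exact kernel options, but the KVM/VFIO-related ones generally needed for this kind of passthrough setup on an Intel host are roughly these (names as they appear in the kernel config):

CONFIG_KVM=y
CONFIG_KVM_INTEL=y
CONFIG_VFIO=y
CONFIG_VFIO_PCI=y
CONFIG_VFIO_PCI_VGA=y
CONFIG_VFIO_IOMMU_TYPE1=y
CONFIG_INTEL_IOMMU=y
CONFIG_IRQ_REMAP=y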
Discussion:
UPDATE:
For AMD RX 560 to work in High Sierra, all is needed is to make sure Lilu and WhateverGreen kexts are installed. This worked even without editing the AMD9500Controller.kext binary.
More importantly -- I finally got the EVGA Nvidia GTX 1080 to pass through in Windows 10 Enterprise (free trial)!!! Sound through the DisplayPort output of the card works perfectly, as long as the MSISupported value under MessageSignaledInterruptProperties is added or changed from 0 to 1 in the Windows Registry. Sound works flawlessly without any lag.
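For reference, this is roughly what that registry change looks like from an elevated prompt inside the guest; the device instance path is a placeholder and has to be looked up in Device Manager for the passed-through card's audio function:

reg add "HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<device-instance-path>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties" /v MSISupported /t REG_DWORD /d 1 /f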
Basically, to get the GTX 1080 card to pass through, I (A) compiled OVMF in Gentoo and then used the default OVMF_CODE and OVMF_VARS fd files under /usr/share/edk2-ovmf/ for QEMU; and then (B) adjusted the -cpu flag in the QEMU command line, such that my QEMU command line looks like this:

#!/bin/bash

echo 1 > /sys/kernel/mm/ksm/run &&
qemu-system-x86_64 \
  -enable-kvm \
  -machine q35,type=pc,accel=kvm,kernel_irqchip=on \
  -m 4G \
  -cpu host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=hello \
  -smp 4,sockets=1,cores=2,threads=2 \
  -device vfio-pci,host=01:00.0,multifunction=on,x-vga=on \
  -device vfio-pci,host=01:00.1 \
  -vga none \
  -usb -device usb-host,hostbus=1,hostaddr=3 \
  -usb -device usb-host,hostbus=1,hostaddr=8 \
  -drive if=pflash,format=raw,readonly,file=OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \
  -boot order=d \
  -drive file=win.disk,format=raw,cache=none,aio=native \
  -cdrom win10.iso \
  -nographic
Note the passthrough of the Nvidia GTX 1080 in the command line above. I did not even need to specify the Nvidia rom dump.
Using the above command line and OVMF files, I was able to boot into the Windows 10 installer to install the trial version. Everything works in Windows 10. Video is smooth and slick. Very near native.
Then, I tried to adjust the above command line for High Sierra too -- the Nvidia card passed through successfully. For High Sierra I used the installed version that was derived from the AMD card experiment detailed below, adding the NvidiaFixedUp.kext to the EFI's kexts/Other folder in addition to the Lilu and WhateverGreen kexts already there. High Sierra was able to boot until it ended with a stop sign.
I think with more experimentation, I can get the Nvidia card to passthrough and boot successfully into High Sierra too. Probably an issue with the config.plist file???
By the way, this is the QEMU command line I used to test the Nvidia card under High Sierra:
qemu-system-x86_64 \
  -enable-kvm \
  -m 4G \
  -cpu Penryn,kvm=off,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,$MY_OPTIONS,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=hello \
  -machine q35,type=pc,accel=kvm,kernel_irqchip=on \
  -smp 4,sockets=1,cores=2,threads=2 \
  -device vfio-pci,host=01:00.0,multifunction=on,x-vga=on \
  -device vfio-pci,host=01:00.1 \
  -device isa-applesmc,osk="" \
  -drive if=pflash,format=raw,readonly,file=OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \
  -smbios type=2 \
  -device ich9-intel-hda -device hda-duplex \
  -device ide-drive,bus=ide.1,drive=MacHDD \
  -drive id=MacHDD,if=none,file=higher.img,format=qcow2 \
  -netdev user,id=usr0 -device e1000-82545em,netdev=usr0,id=vnet0 \
  -balloon none \
  -vga none \
  -nographic \
  -device vfio-pci,host=00:14.0
As mentioned, the above boots with the Nvidia card passed through -- but it stops at the end of the boot screen with a stop sign. Does anyone know how to fix this?
I will try to reproduce this but boot in verbose mode instead to see what is going on. But my gut instinct is that this is very fixable.
Anyhow, I will clean up the old stuff below when I have more time. Will also do more fine tuning and perhaps test with benchmark and games. Will try to get video and screenshots posted.
And yes, when I have both High Sierra (with AMD RX 560 passed through) and Windows 10 Enterprise (with Nvidia GTX 1080) running on the Gentoo (using Intel HD 630) host, all three systems run at near-native or native speed, even though all three are running at the same time. I have not benched marked yet, but they run smooth, even all simultaneously, with videos playing. HAHAHHA.
OLDER STUFF (Read the UPDATE first):
To get High Sierra (10.13 release) working on Linux with QEMU, I followed the instructions at https://github.com/kholia/OSX-KVM. The two OVMF files (OVMF_CODE-pure-efi.fd and OVMF_VARS-pure-efi-1024x768.fd) and also the Clover.qcow2 file there all worked out of the box. All you need is to download those 3 files onto the Linux host. Then, I prepared the requisite High Sierra USB installer by using the usual USB+Clover method that most folks use to test this (select the UEFI option under Clover, not the Legacy option). After that, using the script below (commenting out the 2 VFIO GPU passthrough lines -- the lines mentioning 03:00.0 and 03.00.1 -- for now and use gtk or vnc to output video since GPU passthrough is yet to be done) to get High Sierra installed and running with "soft" video output through gtk, vnc, spice, etc. for the moment, at least until passthrough of the GPU is done later. (Was I repeating myself there? That is the trouble with OCD, there is this irresistible compulsion to do a certain thing in a specific way). I have tried many other methods, but presently it seems that the instructions in the above GitHub link are the only ones that worked.
It is a bit trickier to get High Sierra to run with GPU passthrough to obtain near-native speed. To achieve that, once I installed and booted into High Sierra with soft video output, I [a] patched the AMD9500Controller.kext in /System/Library/Extensions in the High Sierra guest's hard disk using xxd; and then [b] installed the Lilu.kext and WhateverGreen.kext into /System/Library/Extensions. It seems that the binary needed to drive the AMD RX 560 is already included in High Sierra, inside the AMD9500Controller.kext folder. All that is needed is to hex-modify the binary so that the hardware layout of the RX 560 is correctly reflected the binary file in that kext. I modified the "Acre" personality entry in the binary in AMD9500Controller.kext for convenience sake because it has 3 connectors at the back, the same number of connectors as the RX 560. (It is unclear whether [a] is needed if [b] is done -- I have not tested such scenario.)
Specifically, for example, this is what I changed in the binary based on information from the Baffin.rom file from the RX 560 card:
For the "Acre" personality located at 0x121f80 in the binary file, change the hex (of bs=48 since 3 connectors x 16=48) from
00040000040300000001010100000000
11020201000000000008000004020000
00010200000000002103050400000000
to this
00040000040300000001010111020101
00080000000200000001020021030204
04000000140200000001030010000305
All the connectors (DP, HDMI, DVI) at the back of the card should now work perfect.
At any rate, I did both [a] and then [b], and High Sierra boots successfully with AMD RX 560 passthrough, using the following Linux QEMU command line script adapted from https://github.com/kholia/OSX-KVM:

#!/bin/bash

MY_OPTIONS="+aes,+xsave,+avx,+xsaveopt,avx2,+smep"
export QEMU_AUDIO_DRV=alsa &&
qemu-system-x86_64 \
  -enable-kvm \
  -m 8192 \
  -cpu Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,$MY_OPTIONS \
  -machine pc-q35-2.9 \
  -smp cpus=8,sockets=1,cores=4,threads=2 \
  -device isa-applesmc,osk="" \
  -drive if=pflash,format=raw,readonly,file=OVMF_CODE-pure-efi.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS-pure-efi-1024x768.fd \
  -smbios type=2 \
  -device ich9-intel-hda -device hda-duplex \
  -device ide-drive,bus=ide.2,drive=Clover \
  -drive id=Clover,if=none,snapshot=on,format=qcow2,file=./'Clover.qcow2' \
  -device ide-drive,bus=ide.1,drive=MacHDD \
  -drive id=MacHDD,if=none,file=./high.img,format=qcow2 \
  -netdev user,id=usr0 -device e1000-82545em,netdev=usr0,id=vnet0 \
  -balloon none \
  -device vfio-pci,host=03:00.0,multifunction=on \
  -device vfio-pci,host=03:00.1 \
  -vga none \
  -monitor unix:/tmp/monitor.sock,server,nowait \
  -nographic \
  -device vfio-pci,host=00:14.0,bus=pcie.0
Once the above is done, the separate AMDRadeonX4250.kext (responsible for 3D acceleration, etc) seems to get loaded by High Sierra and the RX 560 card should be functioning perfectly, directly connected to the guest OS with metal support.
Note that using the above command line, I have passed through my USB controller as well (vfio-pci,host=00:14.0). The result is that Bluetooth and all my USB ports (I have not tested the 3.1 one) worked out of the box, with my Apple Magic Mouse (Series 1) working perfectly with that tight, smooth, buttery feel. Also working flawlessly are my CoolerMaster L keyboard and all other USB devices, including USB external drives, etc. I would then control the Linux host underneath via ssh from the High Sierra guest. When the Dell monitor is dedicated to the High Sierra guest at full 4K, the Linux host basically becomes transparent and invisible to the user.
Performance-wise, High Sierra runs buttery smooth and beautifully with the AMD RX560 passthrough in QEMU KVM. It is impressively silky, fast and responsive, with QE working and no glitches or hangs or crashes. Apps open almost instantaneously (split second). Ethernet works out of the box and the sound works perfect via the audio output jack of the Dell monitor which is connected to the AMD RX 560 via Display Port. In fact, the setup is so near-native that I'd speculate that a layperson would not notice the difference compared to say, a 2017 iMac (Geekbench 4 benchmark that I ran seem to suggest similar scores) unless the setup is revealed to him. Personally and anecdotally, I do not notice the difference even if I look for them. I mean, this thing is bat-out-of-cave fast. Certainly, it is a whole different league and at a whole different level from the usual slowish virtual box, parallels, vanilla vmware experience. Even compared to my MacBook Pro (also running High Sierra), this setup feels substantially smoother, faster and more responsive.
There are only two very minor noticeable glitches. First, flac audio playback on Fidelia would intermittently "tear" for split seconds if I concurrently run very heavy compile tasks in the Linux Gentoo OS underneath the QEMU/HighSierra. But this is expected. I have not tried CPU-pinning to dedicate specific CPUs to High Sierra yet, but I suppose using CPU-pinning, the lag can be removed since High Sierra would not then have to compete for CPU with the Gentoo Linux host running underneath. Netflix video playback on Chrome/Safari runs smooth, without any lag even under heavy load. Also, the sound in High Sierra via the AMD graphics card works perfectly -- does not suffer from the slight lag as in the case of running Win 10 in QEMU with the same card passed through. Second, in the High Sierra boot screen, the progress bar under the logo would tear slightly during boot up and appear to freeze (but it is still booting underneath) for say 5-6 seconds, before booting into the login/password screen.
With more fine-tuning, I think I can get the set-up to run High Sierra even faster -- but as it is now, it feels like a native machine already. I am super, super impressed with the performance.
Windows 10:
Windows 10 Enterprise (90-day trial version) also works with this QEMU KVM GPU passthrough setup. Everything works out of the box without any patching; all that is needed is to download and install the AMD Radeon video drivers. Performance is very smooth too and near native -- except that the audio output lags behind the video output during Netflix playback by a second or two. I feel that Win 10 in this setup is not as impressive (the "wow" factor) as getting High Sierra to work at near-native speed with GPU passthrough. Both are near-native, but High Sierra just "feels" (to me, anecdotally) better, tighter, more finely tuned, more "buttery" and smoother in this setup as compared to Win10 under the same. Maybe it is just a function of graphical user interface design generally. I don't really know why.
Further Experiment:
Note that all of the above is done despite the fact that I have plugged the AMD RX 560 only in the PCIE x4 slot on the motherboard. This is because I already have the Nvidia GTX 1080 installed in the PCIE x8_1 slot. The AMD RX 560 can't be placed in the x8_2 slot because I use the GTX 1080 for the Linux OS and those two x8 slots are in the same IOMMU hardware group, and hardware in the same IOMMU group cannot be passthrough to two different OSes.
Thus, theoretically, performance in such experiment above could be further improved if I were to use the Intel HD630 onboard graphics for Linux, disconnect the power to the Nvidia GTX (due to the power supply constraint -- by the way, does anyone know how to prevent a GPU card from powering up via the Asus motherboard bios?), and place the AMD RX 560 in the x8_2 slot for the passthrough. Additionally, I can try moving the Linux host to the NVMe .M2 drive for testing for a significant disk speed improvement.
A much more interesting next step would be to procure a second AMD RX 560 card, and place it in a x4 slot, with the first AMD RX 560 card in the x8_2 slot, have Linux running on the Intel HD 630, and then passthrough both AMD cards to High Sierra and Win 10 simultaneously, such that Linux Gentoo, High Sierra and Win10 all run on the same machine at the same time and all at native or near native-speed!
Alternatively, if anyone here knows how to get the Nvidia GTX 1080 to passthrough to High Sierra or Windows via QEMU, I would appreciate it if you could share your knowledge! I have tried to do so many times, but it all ends in black screen with the Nvidia card.
I intend to achieve this as the final goal of this experiment -- with the Dell P4317Q displaying the 4 machines all at the same time -- four split screens at 1920x1080 resolution each, each split screen for each of linux, high sierra, windows, and also my PS 4 Pro.
I also wonder, what happens if I install Gentoo Linux on my MacBook Pro, compile and run QEMU KVM on it, and then passthrough the GPU to the High Sierra guest? Without X Window running (perhaps ssh'ing in to control), I'd speculate that the Linux overhead would be relatively small.
Conclusion:
With all this running on top of Gentoo, there's basically no need to reboot or to troubleshoot incompatible hardware. There are no crashes in the host or the guest though sometimes the guest fails to fire up. Also, backup of guest OSes seems easy -- I just copy the qcow2 hard disk file into storage. And I can fire up and power off the guest anytime I wish without interfering with the Gentoo host running underneath, which has basically becomes transparent. Also, this setup seems, in theory, more resistant to problems when upgrading the OS. And by the way, the messages app seems to work out of the box too.
It is interesting to note that with KVM allowing guest to access many aspects of the host hardware directly, in addition to passing through control of the GPU and USB controllers, discs, other devices etc. to the guest directly, plus pinning CPUs to the guest, the guest is arguably a hybrid of hardware and software, rather than just software. In fact, to me at least, it feels more bare metal than software.
Feel the POWER of Gentoo.
Disclaimer:
All for fair-use, learning, experimental testing only
Screenshots:
https://imgur.com/lxjIFUV (High Sierra GPU Passthrough at full 4K glory)
https://imgur.com/Y66Yd8a (High Sierra + Linux + PS4 Pro)
https://imgur.com/yOSIQIg (Win 10 + Linux + PS4 Pro)
https://imgur.com/y6IgTAm (Apple Magic Mouse (Series 1))
Credit:
D. Kholia (https://github.com/kholia)
submitted by rev0lt001 to VFIO [link] [comments]


qemu-arm -L <prefix> <binary>
qemu-<arch> -L <prefix> <binary>
The -L option is important for when the binary links to external dependencies such as uClibc or encryption libraries: it tells the dynamic linker to look for dependencies with the provided prefix. One example use is running the imgdecrypt binary for the D-Link DIR-882 router.
Installing QEMU: Gentoo: emerge --ask app-emulation/qemu. RHEL/CentOS: yum install qemu-kvm. SUSE: zypper install qemu. macOS: QEMU can be installed from Homebrew (brew install qemu) or from MacPorts (sudo port install qemu); QEMU requires Mac OS X 10.5 or later, but 10.7 or later is recommended. Windows: download QEMU for Windows from the official site (the Windows tab points to https://qemu.weilnetz.de for 32-bit or 64-bit builds), then install it like any other Windows 10/7 software by double-clicking the installer.
Display options: there are a few available options to specify the kind of display to use in QEMU. -display sdl shows video output via SDL (usually in a separate graphics window); -display curses displays video output via curses; -display none does not display video output, which is different from the -nographic option. See the man page for more information.
QEMU is a generic and open source machine emulator and virtualizer: full-system emulation (run operating systems for any machine, on any supported architecture), user-mode emulation (run programs for another Linux/BSD target, on any supported architecture), and virtualization (run KVM and Xen virtual machines with near-native performance). QEMU is a member of Software Freedom Conservancy.
GNU Guix 1.1.0 is also distributed as a QCOW2 QEMU virtual machine image (x86_64, with signatures and installation instructions) and as a self-contained binary tarball providing Guix and its dependencies for x86_64, i686, armhf and aarch64.
Older QEMU-for-Windows changelog notes mention removing the -fstack-protector-all compiler option to reduce code size and possibly improve performance, and installer releases such as 1.7.0-rc0 adding curses support to the system emulations and fixing GTK keyboard input.


