
Chef becomes 100% free software

Post Syndicated from corbet original https://lwn.net/Articles/784627/rss

Chef, the purveyor of a popular configuration-management system, has announced
a move away from the open-core business model and the open-sourcing of all
of its software. “We aren’t making this change lightly. Over the
years we have experimented with and learned from a variety of different
open source, community and commercial models, in search of the right
balance. We believe that this change, and the way we have made it, best
aligns the objectives of our communities with our own business
objectives. Now we can focus all of our investment and energy on building
the best possible products in the best possible way for our community
without having to choose between what is ‘proprietary’ and what is ‘in the
commons.’”


Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/784665/rss

Security updates have been issued by CentOS (firefox, libssh2, and thunderbird), Debian (firmware-nonfree, kernel, and libssh2), Fedora (drupal7, flatpak, and mod_auth_mellon), Gentoo (burp, cairo, glusterfs, libical, poppler, subversion, thunderbird, and unbound), openSUSE (yast2-rmt), Red Hat (freerdp), and SUSE (bash, ed, libarchive, ntp, and sqlite3).

How to run AWS CloudHSM workloads on Docker containers

Post Syndicated from Mohamed AboElKheir original https://aws.amazon.com/blogs/security/how-to-run-aws-cloudhsm-workloads-on-docker-containers/

AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. Your HSMs are part of a CloudHSM cluster. CloudHSM automatically manages synchronization, high availability, and failover within a cluster.

CloudHSM is part of the AWS Cryptography suite of services, which also includes AWS Key Management Service (KMS) and AWS Certificate Manager Private Certificate Authority (ACM PCA). KMS and ACM PCA are fully managed services that are easy to use and integrate. You’ll generally use AWS CloudHSM only if your workload needs a single-tenant HSM under your own control, or if you need cryptographic algorithms that aren’t available in the fully-managed alternatives.

CloudHSM offers several options for you to connect your application to your HSMs, including PKCS#11, Java Cryptography Extensions (JCE), or Microsoft CryptoNG (CNG). Regardless of which library you choose, you’ll use the CloudHSM client to connect to all HSMs in your cluster. The CloudHSM client runs as a daemon, locally on the same Amazon Elastic Compute Cloud (EC2) instance or server as your applications.

The deployment process is straightforward if you’re running your application directly on your compute resource. However, if you want to deploy applications using the HSMs in containers, you’ll need to make some adjustments to the installation and execution of your application and the CloudHSM components it depends on. Docker containers don’t typically include access to an init process like systemd or upstart. This means that you can’t start the CloudHSM client service from within the container using the general instructions provided by CloudHSM. You also can’t run the CloudHSM client service remotely and connect to it from the containers, as the client daemon listens to your application using a local Unix Domain Socket. You cannot connect to this socket remotely from outside the EC2 instance network namespace.

This blog post discusses the workaround that you’ll need in order to configure your container and start the client daemon so that you can utilize CloudHSM-based applications with containers. Specifically, in this post, I’ll show you how to run the CloudHSM client daemon from within a Docker container without needing to start the service. This enables you to use Docker to develop, deploy and run applications using the CloudHSM software libraries, and it also gives you the ability to manage and orchestrate workloads using tools and services like Amazon Elastic Container Service (Amazon ECS), Kubernetes, Amazon Elastic Container Service for Kubernetes (Amazon EKS), and Jenkins.

Solution overview

My solution shows you how to create a proof-of-concept sample Docker container that is configured to run the CloudHSM client daemon. When the daemon is up and running, it runs the AESGCMEncryptDecryptRunner Java class, available on the AWS CloudHSM Java JCE samples repo. This class uses CloudHSM to generate an AES key, then it uses the key to encrypt and decrypt randomly generated data.

Note: In my example, you must manually enter the crypto user (CU) credentials as environment variables when running the container. For any production workload, you’ll need to carefully consider how to provide, secure, and automate the handling and distribution of your HSM credentials. You should work with your security or compliance officer to ensure that you’re using an appropriate method of securing HSM login credentials for your application and security needs.
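One possible pattern, shown purely as a sketch (the secret name and JSON keys below are hypothetical, not part of this walkthrough, and you should still vet the approach against your own requirements), is to keep the CU credentials in AWS Secrets Manager and fetch them on the instance right before the docker run step later in this post:

    # Sketch: fetch hypothetical CU credentials from AWS Secrets Manager
    # (secret id and JSON keys are illustrative only)
    CU_SECRET=$(aws secretsmanager get-secret-value \
        --secret-id cloudhsm/cu-credentials \
        --query SecretString --output text)
    HSM_USER=$(echo "$CU_SECRET" | jq -r .username)
    HSM_PASSWORD=$(echo "$CU_SECRET" | jq -r .password)

This way the credentials reach the container as environment variables without ever being typed on the command line or baked into the image.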

Figure 1: Architectural diagram

Prerequisites

To implement my solution, I recommend that you have basic knowledge of the following:

  • CloudHSM
  • Docker
  • Java

Here’s what you’ll need to follow along with my example:

  1. An active CloudHSM cluster with at least one active HSM. You can follow the Getting Started Guide to create and initialize a CloudHSM cluster. (Note that for any production cluster, you should have at least two active HSMs spread across Availability Zones.)
  2. An Amazon Linux 2 EC2 instance in the same Amazon Virtual Private Cloud in which you created your CloudHSM cluster. The EC2 instance must have the CloudHSM cluster security group attached—this security group is automatically created during the cluster initialization and is used to control access to the HSMs. You can learn about attaching security groups to allow EC2 instances to connect to your HSMs in our online documentation.
  3. A CloudHSM crypto user (CU) account created on your HSM. You can create a CU by following these user guide steps.

Solution details

  1. On your Amazon Linux EC2 instance, install Docker:
    
            # sudo yum -y install docker
            
  2. Start the docker service:
    
            # sudo service docker start
            
  3. Create a new directory and step into it. In my example, I use a directory named “cloudhsm_container.” You’ll use the new directory to configure the Docker image.
    
            # mkdir cloudhsm_container
            # cd cloudhsm_container           
            
  4. Copy the CloudHSM cluster’s CA certificate (customerCA.crt) to the directory you just created. You can find the CA certificate on any working CloudHSM client instance under the path /opt/cloudhsm/etc/customerCA.crt. This certificate is created during initialization of the CloudHSM Cluster and is needed to connect to the CloudHSM cluster.
  5. In your new directory, create a new file with the name run_sample.sh that includes the contents below. The script starts the CloudHSM client daemon, waits until the daemon process is running and ready, and then runs the Java class that is used to generate an AES key to encrypt and decrypt your data.
    
            #! /bin/bash
    
            # start cloudhsm client
            echo -n "* Starting CloudHSM client ... "
            /opt/cloudhsm/bin/cloudhsm_client /opt/cloudhsm/etc/cloudhsm_client.cfg &> /tmp/cloudhsm_client_start.log &
            
            # wait for startup
            while true
            do
                if grep 'libevmulti_init: Ready !' /tmp/cloudhsm_client_start.log &> /dev/null
                then
                    echo "[OK]"
                    break
                fi
                sleep 0.5
            done
            echo -e "\n* CloudHSM client started successfully ... \n"
            
            # start application
            echo -e "\n* Running application ... \n"
            
            java -ea -Djava.library.path=/opt/cloudhsm/lib/ -jar target/assembly/aesgcm-runner.jar --method environment
            
            echo -e "\n* Application completed successfully ... \n"                      
            
  6. In the new directory, create another new file and name it Dockerfile (with no extension). This file will specify that the Docker image is built with the following components:
    • The AWS CloudHSM client package.
    • The AWS CloudHSM Java JCE package.
    • OpenJDK 1.8. This is needed to compile and run the Java classes and JAR files.
    • Maven, a build automation tool that is needed to assist with building the Java classes and JAR files.
    • The AWS CloudHSM Java JCE samples that will be downloaded and built.
  7. Cut and paste the contents below into Dockerfile.

    Note: Make sure to replace the HSM_IP line with the IP of an HSM in your CloudHSM cluster. You can get your HSM IPs from the CloudHSM console, or by running the describe-clusters AWS CLI command.
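    For example, the CLI lookup can be done like this (a sketch; it assumes the AWS CLI is configured with permissions to call CloudHSM):

            # List the elastic network interface IPs of the HSMs in your clusters
            aws cloudhsmv2 describe-clusters \
                --query 'Clusters[].Hsms[].EniIp' --output text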

    
            # Use the amazon linux image
            FROM amazonlinux:2
            
            # Install CloudHSM client
            RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-latest.el7.x86_64.rpm
            
            # Install CloudHSM Java library
            RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-jce-latest.el7.x86_64.rpm
            
            # Install Java, Maven, wget, unzip and ncurses-compat-libs
            RUN yum install -y java maven wget unzip ncurses-compat-libs
            
            # Create a work dir
            WORKDIR /app
            
            # Download sample code
            RUN wget https://github.com/aws-samples/aws-cloudhsm-jce-examples/archive/master.zip
            
            # unzip sample code
            RUN unzip master.zip
            
            # Change to the create directory
            WORKDIR aws-cloudhsm-jce-examples-master
            
            # Build JAR files
            RUN mvn validate && mvn clean package
            
            # Set HSM IP as an environmental variable
            ENV HSM_IP <insert the IP address of an active CloudHSM instance here>
            
            # Configure cloudhsm-client
            COPY customerCA.crt /opt/cloudhsm/etc/
            RUN /opt/cloudhsm/bin/configure -a $HSM_IP
            
            # Copy the run_sample.sh script
            COPY run_sample.sh .
            
            # Run the script
            CMD ["bash","run_sample.sh"]                        
            
  8. Now you’re ready to build the Docker image from the Dockerfile you created in step 6. The following command builds the image and names it jce_sample_client.
    
            # sudo docker build -t jce_sample_client .
            
  9. To run a Docker container from the Docker image you just created, use the following command. Make sure to replace the user and password with your actual CU username and password. (If you need help setting up your CU credentials, see prerequisite 3. For more information on how to provide CU credentials to the AWS CloudHSM Java JCE Library, refer to the steps in the CloudHSM user guide.)
    
            # sudo docker run --env HSM_PARTITION=PARTITION_1 \
            --env HSM_USER=<user> \
            --env HSM_PASSWORD=<password> \
            jce_sample_client
            

    If successful, the output should look like this:

    
            * Starting CloudHSM client ... [OK]
            
            * CloudHSM client started successfully ...
            
            * Running application ...
            
            ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors 
            to the console.
            70132FAC146BFA41697E164500000000
            Successful decryption
                SDK Version: 2.03
            
            * Application completed successfully ...          
            

Conclusion

My solution provides an example of how to run CloudHSM workloads on Docker containers. You can use it as a reference to implement your cryptographic application in a way that benefits from the high availability and load balancing built into AWS CloudHSM without compromising on the flexibility that Docker provides for developing, deploying, and running applications. If you have comments about this post, submit them in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Mohamed AboElKheir

Mohamed AboElKheir joined AWS in September 2017 as a Security CSE (Cloud Support Engineer) based in Cape Town. He is a subject matter expert for CloudHSM and is always enthusiastic about assisting CloudHSM customers with advanced issues and use cases. Mohamed is passionate about InfoSec, specifically cryptography, penetration testing (he’s OSCP certified), application security, and cloud security (he’s AWS Security Specialty certified).

Backblaze’s Must See List for NAB 2019

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/what-not-to-miss-nab2019/

Collage of logos from Backblaze B2 cloud storage partners

With NAB 2019 only days away, the Backblaze team is excited to launch into the world’s largest event for creatives, and our biggest booth yet!

Must See — Backblaze Booth

This year we’ll be celebrating some of the phenomenal creative work by our customers, including American Public Television, Crisp Video, Falcons’ Digital Creative, WunderVu, and many more.

We’ll have workflow experts standing by to chat with you about your workflow frustrations, and how Backblaze B2 Cloud Storage can be the key to unlocking efficiency and solving storage challenges throughout your entire workflow: From Action! To Archive. With B2, you can focus on creating and managing content, not managing storage.

Create: Bring Your Story to Life

Stop by our booth and we can show you how you can protect your content from ingest through work-in-process by syncing seamlessly to the cloud. We can also detail how you can improve team collaboration and increase content reuse by organizing your content with one of our MAM integrations.

Distribute: Share Your Story With the World

Our experts can show you how B2 can help you scale your content library instantly and indefinitely, and avoid the hassle and expense of on-premises storage. We can demonstrate how everything in your content library can be served directly from your B2 account or through our content delivery partners like Cloudflare.

Preserve: Make Sure Your Story Lives Forever

Want to see the math behind the first cloud storage that’s more affordable than LTO? We can step through the numbers. We can also show you how B2 will keep your archived content accessible, anytime, and anywhere, through a web browser, API calls, or one of our integrated applications listed below.

Must See — Workflow Integrations You Can Count On

Our fantastic workflow partners are a critical part of your creative workflow backed by Backblaze — and there’s a lot of partner news to catch up on!

Drop by our booth to pick up a handy map to help you find Backblaze partners on the show floor including:

Backup and Archive Workflow Integrations

Archiware P5, booth SL15416
SyncBackPro, Wynn Salon — J

File Transfer Acceleration, Data Wrangling, Data Movement

FileCatalyst, booth SL12116
Hedge, booth SL14805

Asset and Collaboration Managers

axle ai, booth SL15116
Cantemo iconik, booth SL6021
Cantemo (Portal), booth SL6021
CatDV, booth SL5421
Cubix (Ortana Media Group), booth SL5922
eMAM, booth SL10224

Workflow Storage

Facilis, booth SL6321
GB Labs, booth SL5324
ProMAX, booth SL6313
Scale Logic, booth SL11109
Tiger Technology, booth SL8505
QNAP, booth SL15716
Seagate, booth SL8511
StorageDNA, booth SL11810

Must See — Backblaze Events during NAB

Monday morning we’re delivering a presentation in the Scale Logic Knowledge Zone, and Tuesday night of NAB we’re honored to help sponsor the all-new Faster Together event that replaces the long-standing Las Vegas Creative User Supermeet event.

We’ll be raffling off a Hover2 4K drone powered by AI to help you get that perfect drone shot for your next creative film! So after the NAB show wraps up on Tuesday, head over to the Rio main ballroom for a night of mingling with creatives and amazing talks by some of the top editors, colorists, and VFX artists in the industry.

ProVideoTech and Backblaze at Scale Logic Knowledge Zone
Monday April 8 at 11 AM
Scale Logic Knowledge Zone, NAB Booth SL11109
Monday of NAB, Backblaze and PVT will deliver a live presentation for NAB attendees on how to build hybrid-cloud workflows with Cantemo and Backblaze.
Scale Logic Media Management Knowledge Zone

Backblaze at The Faster Together Stage
Tuesday, April 9
Rio Las Vegas Hotel and Casino
Doors open at 4:30 PM, stage presentations begin at 7:00 PM
Reserve Tickets for the Faster Together event

If you haven’t yet, be sure to sign up and reserve your meeting time with the Backblaze team, and add us to your Map My Show NAB plan and we’ll see you there!

  NAB 2019 is just a few days away. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post Backblaze’s Must See List for NAB 2019 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

VMware Suit Concludes in Germany

Post Syndicated from ris original https://lwn.net/Articles/784673/rss

Software Freedom Conservancy reports
that the Hamburg Higher Regional Court affirmed the lower court’s decision,
which dismissed Christoph Hellwig’s case against VMware in
Germany. Hellwig will not pursue the case further in German courts.

Conservancy’s staff also spent a significant amount of time and resources
at each stage of the proceedings — most recently, analyzing what this
ruling could mean for future enforcement actions. The German court made a
final decision in this case on procedure and standing, not on
substance. While we are disappointed that the courts did not take the
opportunity to deliver a clear pro-software-freedom ruling, this ruling
does not set precedent and the implications of the decision are
limited. This matter certainly would proceed differently with different
presentation of plaintiffs or in another jurisdiction.

In addition to VMware committing to removing vmklinux from their kernel, this case also succeeded in sparking significant discussion about the community-wide implications for free software when some companies play by the rules while others continually break them. Our collective insistence, that licensing terms are not optional, has now spurred other companies to take copyleft compliance more seriously. The increased focus on respecting licenses post-lawsuit and providing source code for derivative works — when coupled with VMware’s reluctant but eventual compliance — is a victory, even if we must now look to other jurisdictions and other last-resort legal actions to adjudicate the question of the GPL and derivative works of Linux.

The Debian Project mourns the loss of Innocent de Marchi

Post Syndicated from ris original https://lwn.net/Articles/784677/rss

The Debian Project sadly announced the passing of Innocent de Marchi. “Innocent was a math teacher and a free software developer. One of his
passions was tangram puzzles, which led him to write a tangram-like game
that he later packaged and maintained in Debian. Soon his contributions
expanded to other areas, and he also worked as a tireless translator
into Catalan.”

[$] Program names and “pollution”

Post Syndicated from jake original https://lwn.net/Articles/784508/rss

A Linux user’s $PATH likely contains well over a thousand different
commands that were installed by various packages. It’s not immediately
obvious which package is responsible for a command with
a generic name, like createuser. There are ways to figure it out, of
course, but perhaps it would make sense for packages like PostgreSQL, which
is responsible for createuser, to give their commands names that
are less generic—and more easily disambiguated—such as
pg_createuser. But renaming commands down the road has “backward
compatibility problems”
written all over it, as a recent discussion on the pgsql-hackers mailing
list shows.
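There are, indeed, ways to ask the packaging system directly; for example (illustrative commands for RPM- and Debian-based systems, with output varying by distribution):

    rpm -qf "$(command -v createuser)"     # RPM-based systems
    dpkg -S "$(command -v createuser)"     # Debian-based systems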

Walkthrough for Portable Services in Go

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/walkthrough-for-portable-services-in-go.html

Portable Services Walkthrough (Go Edition)

A few months ago I posted a blog story with a walkthrough of systemd Portable Services. The
example service given was written in C, and the image was built with
mkosi. In this blog story I’d
like to revisit the exercise, but this time focus on a different
aspect: modern programming languages like Go and Rust push users a lot
more towards static linking of libraries than the usual dynamic
linking preferred by C (at least in the way C is used by traditional
Linux distributions).

Static linking means we can greatly simplify image building: if we
don’t have to link against shared libraries during runtime we don’t
have to include them in the portable service image. And that means
pretty much all need for building an image from a Linux distribution
of some kind goes away as we’ll have next to no dependencies that
would require us to rely on a distribution package manager or
distribution packages. In fact, as it turns out, we only need as few
as three files in the portable service image to be fully functional.

So, let’s have a closer look at how such an image can be put together. All of the following is available in this git repository.

A Simple Go Service

Let’s start with a simple Go service, an HTTP service that simply
counts how often a page from it is requested. Here are the sources:
main.go
— note that I am not a seasoned Go programmer, hence please be
gracious.

The service implements systemd’s socket activation protocol, and thus
can receive bound TCP listener sockets from systemd, using the
$LISTEN_PID and $LISTEN_FDS environment variables.

The service will store the counter data in the directory indicated in
the $STATE_DIRECTORY environment variable, which happens to be an
environment variable current systemd versions set based on the
StateDirectory=
setting in service files.
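To make the protocol concrete, here is a minimal sketch of that logic (written for this summary, not taken from the repository’s actual main.go, so treat names and error handling as illustrative): systemd passes the first activated socket as file descriptor 3, and the $LISTEN_PID check guards against acting on environment variables meant for another process.

package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
    "os"
    "path/filepath"
    "strconv"
)

func main() {
    // systemd sets $LISTEN_PID to our PID and $LISTEN_FDS to the number
    // of sockets it passed; the first socket is always file descriptor 3.
    if os.Getenv("LISTEN_PID") != strconv.Itoa(os.Getpid()) ||
        os.Getenv("LISTEN_FDS") != "1" {
        log.Fatal("expected exactly one socket passed by systemd")
    }
    listener, err := net.FileListener(os.NewFile(3, "listen-socket"))
    if err != nil {
        log.Fatal(err)
    }

    // $STATE_DIRECTORY points at the private state directory systemd
    // created for us because of the StateDirectory= setting.
    counterFile := filepath.Join(os.Getenv("STATE_DIRECTORY"), "counter")

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        n := 0
        if data, err := os.ReadFile(counterFile); err == nil {
            n, _ = strconv.Atoi(string(data))
        }
        n++
        if err := os.WriteFile(counterFile, []byte(strconv.Itoa(n)), 0600); err != nil {
            log.Fatal(err)
        }
        fmt.Fprintf(w, "Hello! You are visitor #%d!\n", n)
    })
    log.Fatal(http.Serve(listener, nil))
}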

Two Simple Unit Files

When a service shall be managed by systemd a unit file is
required. Since the service we are putting together shall be socket
activatable, we even have two:
portable-walkthrough-go.service
(the description of the service binary itself) and
portable-walkthrough-go.socket
(the description of the sockets to listen on for the service).

These units are not particularly remarkable: the .service file
primarily contains the command line to invoke and a StateDirectory=
setting to make sure the service when invoked gets its own private
state directory under /var/lib/ (and the $STATE_DIRECTORY
environment variable is set to the resulting path). The .socket file
simply lists 8088 as TCP/IP port to listen on.
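Spelled out, the two units look roughly like this (a paraphrase of the description above, not the exact files from the repository; in particular the ExecStart= path is an assumption):

# portable-walkthrough-go.service (sketch)
[Unit]
Description=Walkthrough Go portable service

[Service]
# Binary path inside the image is assumed; StateDirectory= creates
# /var/lib/portable-walkthrough-go and sets $STATE_DIRECTORY.
ExecStart=/usr/local/bin/portable-walkthrough-go
StateDirectory=portable-walkthrough-go

# portable-walkthrough-go.socket (sketch)
[Socket]
ListenStream=8088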

An OS Description File

OS images (and that includes portable service images) generally should
include an
os-release
file. Usually, that is provided by the distribution. Since we are
building an image without any distribution let’s write our own
version of such a file. Later
on we can use the portablectl inspect command to have a look at this
metadata of our image.

Putting it All Together

The four files described above are already every file we need to build
our image. Let’s now put the portable service image together. For that
I’ve written a
Makefile. It
contains two relevant rules: the first one builds the static binary
from the Go program sources. The second one then puts together a
squashfs file system combining the following:

  1. The compiled, statically linked service binary
  2. The two systemd unit files
  3. The os-release file
  4. A couple of empty directories such as /proc/, /sys/, /dev/
    and so on that need to be over-mounted with the respective kernel
    API file system. We need to create them as empty directories here
    since Linux insists on directories to exist in order to over-mount
    them, and since the image we are building is going to be an
    immutable read-only image (squashfs) these directories cannot be
    created dynamically when the portable image is mounted.
  5. Two empty files /etc/resolv.conf and /etc/machine-id that can
    be over-mounted with the same files from the host.
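Expressed as a shell sketch (the Makefile rule in the repository is authoritative; the staging paths here are illustrative), the assembly step amounts to something like:

# Stage the image contents in a scratch directory
mkdir -p rootfs/proc rootfs/sys rootfs/dev rootfs/run rootfs/tmp rootfs/var/tmp
mkdir -p rootfs/usr/local/bin rootfs/usr/lib/systemd/system rootfs/etc
touch rootfs/etc/resolv.conf rootfs/etc/machine-id
cp portable-walkthrough-go rootfs/usr/local/bin/
cp portable-walkthrough-go.service portable-walkthrough-go.socket \
   rootfs/usr/lib/systemd/system/
cp os-release rootfs/usr/lib/os-release

# Pack it into an immutable, read-only squashfs image
mksquashfs rootfs portable-walkthrough-go.raw -all-root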

And that’s already it. After a quick make we’ll have our portable
service image portable-walkthrough-go.raw and are ready to go.

Trying it out

Let’s now attach the portable service image to our host system:

# portablectl attach ./portable-walkthrough-go.raw
(Matching unit files with prefix 'portable-walkthrough-go'.)
Created directory /etc/systemd/system.attached.
Created directory /etc/systemd/system.attached/portable-walkthrough-go.socket.d.
Written /etc/systemd/system.attached/portable-walkthrough-go.socket.d/20-portable.conf.
Copied /etc/systemd/system.attached/portable-walkthrough-go.socket.
Created directory /etc/systemd/system.attached/portable-walkthrough-go.service.d.
Written /etc/systemd/system.attached/portable-walkthrough-go.service.d/20-portable.conf.
Created symlink /etc/systemd/system.attached/portable-walkthrough-go.service.d/10-profile.conf → /usr/lib/systemd/portable/profile/default/service.conf.
Copied /etc/systemd/system.attached/portable-walkthrough-go.service.
Created symlink /etc/portables/portable-walkthrough-go.raw → /home/lennart/projects/portable-walkthrough-go/portable-walkthrough-go.raw.

The portable service image is now attached to the host, which means we
can now go and start it (or even enable it):

# systemctl start portable-walkthrough-go.socket

Let’s see if our little web service works, by doing an HTTP request on port 8088:

# curl localhost:8088
Hello! You are visitor #1!

Let’s try this again, to check if it counts correctly:

# curl localhost:8088
Hello! You are visitor #2!

Nice! It worked. Let’s now stop the service again, and detach the image again:

# systemctl stop portable-walkthrough-go.service portable-walkthrough-go.socket
# portablectl detach portable-walkthrough-go
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d/10-profile.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d/20-portable.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.service.d.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.d/20-portable.conf.
Removed /etc/systemd/system.attached/portable-walkthrough-go.socket.d.
Removed /etc/portables/portable-walkthrough-go.raw.
Removed /etc/systemd/system.attached.

And there we go, the portable image file is detached from the host again.

A Couple of Notes

  1. Of course, this is a simplistic example: in real life services will
    be more than one compiled file, even when statically linked. But
    you get the idea, and it’s very easy to extend the example above to
    include any additional, auxiliary files in the portable service
    image.

  2. The service is very nicely sandboxed during runtime: while it runs
    as regular service on the host (and you thus can watch its logs or
    do resource management on it like you would do for all other
    systemd services), it runs in a very restricted environment under a
    dynamically assigned UID that ceases to exist when the service is
    stopped again.

  3. Originally I wanted to make the service not only socket activatable
    but also implement exit-on-idle, i.e. add a logic so that the
    service terminates on its own when there’s no ongoing HTTP
    connection for a while. I couldn’t figure out how to do this
    race-freely in Go though, but I am sure an interested reader might
    want to add that? By combining socket activation with exit-on-idle
    we can turn this project into an exercise of putting together an
    extremely resource-friendly and robust service architecture: the
    service is started only when needed and terminates when no longer
    needed. This would allow us to pack services at a much higher density
    even on systems with few resources.

  4. While the basic concepts of portable services have been around
    since systemd 239, it’s best to try the above with systemd 241 or
    newer since the portable service logic received a number of fixes
    since then.

Further Reading

A low-level document introducing Portable Services is shipped along with systemd.

Please have a look at the blog story from a few months ago that did something very similar with a service written in C.

There are also relevant manual pages:
portablectl(1)
and
systemd-portabled(8).


Grafana v6.1 Released

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2019/04/03/grafana-v6.1-released/

v6.1 Stable released!

A few weeks have passed since the excitement of the major Grafana 6.0 release during GrafanaCon, which means it’s time for a new Grafana release. Grafana 6.1 iterates on the permissions system to allow teams to be more self-organizing. It also includes a feature for Prometheus that enables a more exploratory workflow for dashboards.

What’s New in Grafana v6.1

Download Grafana 6.1 Now

Ad hoc filtering for Prometheus

The ad hoc filter feature allows you to create new key/value filters on the fly with autocomplete for both keys and values. The filter condition is then automatically applied to all queries on the dashboard. This makes it easier to explore your data in a dashboard without changing queries and without having to add new template variables.

Other timeseries databases with label-based query languages have had this feature for a while. Prometheus recently added support for fetching label names from its API, and thanks to Mitsuhiro Tanda’s work implementing it in Grafana, the Prometheus datasource finally supports ad hoc filtering.

Support for fetching label names was released in Prometheus v2.6.0, so that is a requirement for this feature to work in Grafana.
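The underlying endpoint can be exercised directly against a Prometheus server; for example (assuming a local server on the default port):

# Fetch all label names known to this Prometheus server (v2.6.0+)
curl http://localhost:9090/api/v1/labels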

Editors can own dashboards, folders, and teams they create

When the dashboard folders feature and permissions system were released in Grafana 5.0, users with the editor role were not allowed to administer dashboards, folders, or teams. In the 6.1 release, we have added a config option that can change the default permissions so that editors are admins for any dashboard, folder, or team they create.

This feature also adds a new team permission that can be assigned to any user with the editor or viewer role and enables that user to add other users to the team.

We believe that this is more in line with the Grafana philosophy, as it will allow teams to be more self-organizing. This option will be made permanent if it gets positive feedback from the community, so let us know what you think in the issue on GitHub.

To turn this feature on, add the following config option to your Grafana ini file in the users section, and then restart the Grafana server:

[users]
editors_can_admin = true

List and revoke user auth tokens in the API

As the first step toward a feature that would enable you to list a user’s signed-in devices/sessions and to log out those devices from the Grafana UI, support has been added to the API to list and revoke user authentication tokens.
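As a sketch of what this looks like against the HTTP API (the user id and token id here are placeholders, and the calls require Grafana admin credentials):

# List a user's auth tokens
curl -u admin:admin http://localhost:3000/api/admin/users/1/auth-tokens

# Revoke one of the returned tokens
curl -u admin:admin -X POST -H "Content-Type: application/json" \
  -d '{"authTokenId": 5}' \
  http://localhost:3000/api/admin/users/1/revoke-auth-token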

Minor Features and Fixes

This release contains a lot of small features and fixes:

  • A new keyboard shortcut d l toggles all graph legends in a dashboard.
  • A small bug fix for Elasticsearch – template variables in the alias field now work properly.
  • Some new capabilities have been added for datasource plugins that will be of interest to plugin authors:
    • There’s a new oauth pass-through option.
    • It’s now possible to add user details to requests sent to the dataproxy.
  • Heatmap and Explore fixes.
  • The Prometheus range query alignment was moved down by one interval. If you have added an offset to your queries to compensate for alignment issues, you can now safely remove it.

Changelog

Check out the CHANGELOG.md file for a complete list of new features, changes, and bug fixes.

Download

Head to the download page for download links & instructions.

Thanks

A big thanks to all the Grafana users who contribute by submitting PRs, bug reports, and feedback!

Eight years, 2000 blog posts

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/eight-years-2000-blog-posts/

Today’s a bit of a milestone for us: this is the 2000th post on this blog.

Why does a computer company have a blog? When did it start, who writes it, and where does the content come from? And don’t you have sore fingers? All of these are good questions: I’m here to answer them for you.

The first ever Raspberry Pi blog post

Marital circumstances being what they are, I had a front-row view of everything that was going on at Raspberry Pi, right from the original conversations that kicked the project off in 2009. In 2011, when development was still being done on Eben’s and my kitchen table, we met with sudden and slightly alarming fame when Rory Cellan-Jones from the BBC shot a short video of a prototype Raspberry Pi and blogged about it – his post went viral. I was working as a freelance journalist and editor at the time, but realised that we weren’t going to get a better chance to kickstart a community, so I dropped my freelance work and came to work full-time for Raspberry Pi.

Setting up an instantiation of WordPress so we could talk to all Rory’s readers, each of whom decided we’d promised we’d make them a $25 computer, was one of the first orders of business. We could use the WordPress site to announce news, and to run a sort of devlog, which is what became this blog; back then, many of our blog posts were about the development of the original Raspberry Pi.

It was a lovely time to be writing about what we do, because we could be very open about the development process and how we were moving towards launch in a way that sadly, is closed to us today. (If we’d blogged about the development of Raspberry Pi 3 in the detail we’d blogged about Raspberry Pi 1, we’d not only have been handing sensitive and helpful commercial information to the large number of competitor organisations that have sprung up like mushrooms since that original launch; but you’d also all have stopped buying Pi 2 in the run-up, starving us of the revenue we need to do the development work.)

Once Raspberry Pis started making their way into people’s hands in early 2012, I realised there was something else that it was important to share: news about what new users were doing with their Pis. And I will never, ever stop being shocked at the applications of Raspberry Pi that you come up with. Favourites from over the years? The paludarium’s still right up there (no, I didn’t know what a paludarium was either when I found out about it); the cucumber sorter’s brilliant; and the home-brew artificial pancreas blows my mind. I’ve a particular soft spot for musical projects (which I wish you guys would comment on a bit more so I had an excuse to write about more of them).



As we’ve grown, my job has grown too, so I don’t write all the posts here like I used to. I oversee press, communications, marketing and PR for Raspberry Pi Trading now, working with a team of writers, editors, designers, illustrators, photographers, videographers and managers – it’s very different from the days when the office was that kitchen table. Alex Bate, our magisterial Head of Social Media, now writes a lot of what you see on this blog, but it’s always a good day for me when I have time to pitch in and write a post.

I’d forgotten some of the early stuff before looking at 2011’s blog posts to jog my memory as I wrote today’s. What were we thinking when we decided to ship without GPIO pins soldered on? (Happily for the project and for the 25,000,000 Pi owners all over the world in 2019, we changed our minds before we finally launched.) Just how many days in aggregate did I spend stuffing envelopes with stickers at £1 a throw to raise some early funds to get the first PCBs made? (I still have nightmares about the paper cuts.) And every time I think I’m having a bad day, I need to remember that this thing happened, and yet everything was OK again in the end. (The backs of my hands have gone all prickly just thinking about it.) Now I think about it, the Xenon Death Flash happened too. We also survived that.

At the bottom of it all, this blog has always been about community. It’s about sharing what we do, what you do, and making links between people all over the world who have this little machine in common. The work you do telling people about Raspberry Pi, putting it into your own projects, and supporting us by buying the product doesn’t just help us make hardware: every penny we make funds the Raspberry Pi Foundation’s charitable work, helps kids on every continent to learn the skills they need to make their own futures better, and, we think, makes the world a better place. So thank you. As long as you keep reading, we’ll keep writing.

The post Eight years, 2000 blog posts appeared first on Raspberry Pi.

How Political Campaigns Use Personal Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/how_political_c.html

Really interesting report from Tactical Tech.

Data-driven technologies are an inevitable feature of modern political campaigning. Some argue that they are a welcome addition to politics as normal and a necessary and modern approach to democratic processes; others say that they are corrosive and diminish trust in already flawed political systems. The use of these technologies in political campaigning is not going away; in fact, we can only expect their sophistication and prevalence to grow. For this reason, the techniques and methods need to be reviewed outside the dichotomy of ‘good’ or ‘bad’ and beyond the headlines of ‘disinformation campaigns’.

All the data-driven methods presented in this guide would not exist without the commercial digital marketing and advertising industry. From analysing behavioural data to A/B testing and from geotargeting to psychometric profiling, political parties are using the same techniques to sell political candidates to voters that companies use to sell shoes to consumers. The question is, is that appropriate? And what impact does it have not only on individual voters, who may or may not be persuaded, but on the political environment as a whole?

The practice of political strategists selling candidates as brands is not new. Vance Packard wrote about the ‘depth probing’ techniques of ‘political persuaders’ as early as 1957. In his book, ‘The Hidden Persuaders’, Packard described political strategies designed to sell candidates to voters ‘like toothpaste’, and how public relations directors at the time boasted that ‘scientific methods take the guesswork out of politics’. In this sense, what we have now is a logical progression of the digitisation of marketing techniques and political persuasion techniques.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/784806/rss

Security updates have been issued by Debian (apache2), Fedora (edk2 and tomcat), openSUSE (ansible, ghostscript, lftp, libgxps, libjpeg-turbo, libqt5-qtimageformats, libqt5-qtsvg, libssh2_org, openssl-1_0_0, openwsman, pdns, perl-Email-Address, putty, python-azure-agent, python-cryptography, python-pyOpenSSL, python-Flask, thunderbird, tor, unzip, and wireshark), Scientific Linux (freerdp), Slackware (wget), SUSE (bluez, file, firefox, libsndfile, netpbm, thunderbird, and xen), and Ubuntu (busybox, firebird2.5, kernel, linux, linux-aws, linux-azure, linux-gcp, linux-kvm, linux-raspi2, linux, linux-aws, linux-kvm, linux-raspi2, linux-snapdragon, linux-hwe, linux-aws-hwe, linux-azure, linux-gcp, linux-oracle, linux-hwe, linux-azure, linux-lts-trusty, linux-lts-xenial, linux-aws, linux-raspi2, and policykit-1).

A set of stable kernels

AWS Security releases IoT security whitepaper

Post Syndicated from Momena Cheema original https://aws.amazon.com/blogs/security/aws-security-releases-iot-security-whitepaper/

We’ve published a whitepaper, Securing Internet of Things (IoT) with AWS, to help you understand and address data security as it relates to your IoT devices and the data generated by them. The whitepaper is intended for a broad audience interested in learning about AWS IoT security capabilities at a service-specific level, as well as for compliance, security, and public policy professionals.

IoT technologies connect devices and people in a multitude of ways and are used across industries. For example, IoT can help manage thermostats remotely across buildings in a city, efficiently control hundreds of wind turbines, or operate autonomous vehicles more safely. With all of the different types of devices and the data they transmit, security is a top concern.

The specific challenges that IoT technologies present have piqued the interest of governments worldwide, which are currently assessing what, if any, new regulatory requirements should take shape to keep pace with IoT innovation and the general problem of securing data. As a specific example, this whitepaper covers recent IoT-specific developments published by the National Institute of Standards and Technology (NIST) and the United Kingdom’s Code of Practice.

If you have questions or want to learn more, contact your account executive, or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Momena Cheema

Momena is passionate about evangelizing the security and privacy capabilities of AWS services through the lens of global emerging technology and trends, such as Internet of Things, artificial intelligence, and machine learning through written content, workshops, talks, and educational campaigns. Her goal is to bring the security and privacy benefits of the cloud to customers across industries in both public and private sectors.


[$] The return of the lockdown patches

Post Syndicated from jake original https://lwn.net/Articles/784674/rss

It’s been a year since we looked in on the
kernel lockdown patches; that’s because things have been fairly quiet on
that front since there was a loud and
discordant dispute
about them back then. But Matthew Garrett has been
posting new versions over the last two months; it would seem that the
changes that have been made might be enough to tamp down the flames and,
perhaps, even allow them to be merged into the mainline.

Strengthening the rule of law in the EU

Post Syndicated from nellyo original https://nellyo.wordpress.com/2019/04/03/rule-of-law/

According to a press release of 3 April 2019, the European Commission is opening a broad debate at the European level on ways to further strengthen the rule of law. The release goes on to say that

The Commission already uses a number of instruments to closely monitor, assess, and respond to rule-of-law problems in the member states, including the Rule of Law Framework, the procedure under Article 7(1) TEU, infringement proceedings, as well as the European Semester, the EU Justice Scoreboard, and the Cooperation and Verification Mechanism (CVM). Drawing on the experience gained so far through all of these instruments, the Commission today outlines three areas of action that would help achieve more effective enforcement of the rule of law in the Union:

· Better promotion: rule-of-law standards and case law are not always sufficiently well known at the national level. To change this, more effort should be put into promoting them nationally. This could be achieved, for example, through public awareness campaigns, common EU approaches that help foster a stronger culture of respect for the rule of law among institutions and professionals, sustained engagement with the Council of Europe, and participation of civil society at the regional and local level.

· Early prevention: although the member states bear the primary responsibility for ensuring that the rule of law is respected at the national level, the EU can offer significant support in building the resilience of key systems and institutions. Regular cooperation and dialogue could contribute to a deeper understanding of the rule-of-law situation and developments in the member states, and to resolving rule-of-law problems at an early stage.

· Tailored responses: the variety of rule-of-law challenges calls for a variety of effective responses. The Commission will continue to ensure the correct application of EU law through infringement proceedings. Different approaches may also be appropriate in specific policy areas, as with the Commission's proposal on the protection of the EU's financial interests. In addition, improvements to the existing Rule of Law Framework could be considered, including the possibility of early information to and support from the European Parliament and the Council, as well as clear timelines for the duration of the dialogues.

Communication: Further strengthening the rule of law in the Union – state of play and possible next steps

Factsheet: The EU's rule-of-law toolbox

Press release – European Citizens' Initiative: Commission registers an initiative entitled 'Respect for the rule of law'

Press release – Rule of law: the European Commission opens an infringement procedure to protect judges in Poland from political control

Learn about AWS Services & Solutions – April AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-april-aws-online-tech-talks/

Join us this April to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Blockchain

May 2, 2019 | 11:00 AM – 12:00 PM PT – How to Build an Application with Amazon Managed Blockchain – Learn how to build an application on Amazon Managed Blockchain with the help of demo applications and sample code.

Compute

April 29, 2019 | 1:00 PM – 2:00 PM PT – How to Optimize Amazon Elastic Block Store (EBS) for Higher Performance – Learn how to optimize performance and spend on your Amazon Elastic Block Store (EBS) volumes.

May 1, 2019 | 11:00 AM – 12:00 PM PT – Introducing New Amazon EC2 Instances Featuring AMD EPYC and AWS Graviton Processors – See how new Amazon EC2 instance offerings that feature AMD EPYC processors and AWS Graviton processors enable you to optimize performance and cost for your workloads.

Containers

April 23, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive on AWS App Mesh – Learn how AWS App Mesh makes it easy to monitor and control communications for services running on AWS.

March 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into Container Networking – Dive deep into microservices networking and how you can build, secure, and manage the communications into, out of, and between the various microservices that make up your application.

Databases

April 23, 2019 | 1:00 PM – 2:00 PM PT – Selecting the Right Database for Your Application – Learn how to develop a purpose-built strategy for databases, where you choose the right tool for the job.

April 25, 2019 | 9:00 AM – 10:00 AM PT – Mastering Amazon DynamoDB ACID Transactions: When and How to Use the New Transactional APIs – Learn how Amazon DynamoDB’s new transactional APIs simplify the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables.

DevOps

April 24, 2019 | 9:00 AM – 10:00 AM PT – Running .NET applications with AWS Elastic Beanstalk Windows Server Platform V2 – Learn about the easiest way to get your .NET applications up and running on AWS Elastic Beanstalk.

Enterprise & Hybrid

April 30, 2019 | 11:00 AM – 12:00 PM PT – Business Case Teardown: Identify Your Real-World On-Premises and Projected AWS Costs – Discover tools and strategies to help you as you build your value-based business case.

IoT

April 30, 2019 | 9:00 AM – 10:00 AM PT – Building the Edge of Connected Home – Learn how AWS IoT edge services are enabling smarter products for the connected home.

Machine Learning

April 24, 2019 | 11:00 AM – 12:00 PM PT – Start Your Engines and Get Ready to Race in the AWS DeepRacer League – Learn more about reinforcement learning, how to build a model, and compete in the AWS DeepRacer League.

April 30, 2019 | 1:00 PM – 2:00 PM PT – Deploying Machine Learning Models in Production – Learn best practices for training and deploying machine learning models.

May 2, 2019 | 9:00 AM – 10:00 AM PT – Accelerate Machine Learning Projects with Hundreds of Algorithms and Models in AWS Marketplace – Learn how to use third party algorithms and model packages to accelerate machine learning projects and solve business problems.

Networking & Content Delivery

April 23, 2019 | 9:00 AM – 10:00 AM PT – Smart Tips on Application Load Balancers: Advanced Request Routing, Lambda as a Target, and User Authentication – Learn tips and tricks about important Application Load Balancers (ALBs) features that were recently launched.

Productivity & Business Solutions

April 29, 2019 | 11:00 AM – 12:00 PM PT – Learn How to Set up Business Calling and Voice Connector in Minutes with Amazon Chime – Learn how Amazon Chime Business Calling and Voice Connector can help you with your business communication needs.

May 1, 2019 | 1:00 PM – 2:00 PM PT – Bring Voice to Your Workplace – Learn how you can bring voice to your workplace with Alexa for Business.

Serverless

April 25, 2019 | 11:00 AM – 12:00 PM PT – Modernizing .NET Applications Using the Latest Features on AWS Development Tools for .NET – Get a deep dive and demonstration of the latest updates to the AWS SDK and tools for .NET to make development even easier, more powerful, and more productive.

May 1, 2019 | 9:00 AM – 10:00 AM PT – Customer Showcase: Improving Data Processing Workloads with AWS Step Functions’ Service Integrations – Learn how innovative customers like SkyWatch are coordinating AWS services using AWS Step Functions to improve productivity.

Storage

April 24, 2019 | 1:00 PM – 2:00 PM PT – Amazon S3 Glacier Deep Archive: The Cheapest Storage in the Cloud – See how Amazon S3 Glacier Deep Archive offers the lowest cost storage in the cloud, at prices significantly lower than storing and maintaining data in on-premises magnetic tape libraries or archiving data offsite.

[$] How to (not) fix a security flaw

Post Syndicated from jake original https://lwn.net/Articles/784758/rss

A pair of flaws in the web interface for two small-business Cisco routers
make for a prime example of the wrong way to go about security fixes.
These kinds of flaws are, sadly, fairly common, but the comedy of errors
that resulted here is, thankfully, rather rare. Among other things, it
shows that
vendors may wish to await a
real fix rather than to release a small, ineffective band-aid to try to close
a gaping hole.

Learn About Amazon Pinpoint at Upcoming Events Around the World

Post Syndicated from Hannah Nilsson original https://aws.amazon.com/blogs/messaging-and-targeting/learn-about-amazon-pinpoint-at-upcoming-events/

Connect with the AWS Customer Engagement team at events around the world to learn how our technology can help you better engage with your customers. Get demos on recent feature releases, discover how you can use Pinpoint for your specific use case, and attend informative sessions to hear how companies around the world are using AWS Customer Engagement solutions to deliver better experiences for their customers. Plus, read below to find out how Amazon Pinpoint and Amazon SES both enable you to create innovative email experiences with the recent AMP Project launch.

AWS Customer Engagement in the news: Amazon SES and Amazon Pinpoint support build the future of email with AMP

The AMP Project’s mission is to enable more user-first experiences on the web, including web-based technology like email. On March 26, the AMP Project announced that they are bringing AMP technology to email in order to give users an interactive, real-time experience that also keeps inboxes safe.

Amazon Pinpoint and Amazon SES both provide out-of-the-box support for AMP for email with no additional configuration. This allows you to easily create experiences for your customers such as submitting RSVPs to events, filling out questionnaires, browsing catalogs, or responding to comments right within the email.

Read the AMP announcement for more information about these new capabilities. To learn how to use the AMP format with Amazon SES, visit the SES Developer Guide. To learn how to use the AMP format with Amazon Pinpoint, read this Amazon Pinpoint API Reference. View these instructions for more information on how to add AMP to an existing email.
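For a sense of the format, a minimal AMP for Email document looks roughly like this (based on the AMP Project's published boilerplate; the format is defined by the AMP spec rather than by SES or Pinpoint):

<!doctype html>
<html ⚡4email>
<head>
  <meta charset="utf-8">
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <style amp4email-boilerplate>body{visibility:hidden}</style>
</head>
<body>
  Hello! Interactive AMP components can go here.
</body>
</html>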

Amazon Pinpoint has been busy building. You can now:

  • Learn how to set up an email preference management web page that enables customers to manage their email subscription preferences. Read now.
  • Learn how to set up a web form that collects information from new customers, and then sends them an SMS message to confirm that they want to receive content from you. Read now.
  • Use Amazon Pinpoint in the US West (Oregon), EU (Frankfurt), and EU (Ireland) regions in addition to the US East (Virginia) region. Learn more.
  • Deliver voice messages to your users with Amazon Pinpoint Voice. Learn more.
  • Set up campaigns that auto-send messages to your customers when they take specific actions. Learn more.
  • Detect and understand issues impacting your email deliverability with the Amazon Pinpoint Deliverability Dashboard. Learn more. 

Meet an Amazon Pinpoint expert at these upcoming events. We will teach you how to take advantage of recent updates so that you can create better engagement experiences for your customers. Plus, we can give you an inside look at what’s on our roadmap, and we’ll be giving out custom Pinpoint swag!

AWS Summit, Singapore 

April 10, 2019
Singapore Expo Convention & Exhibition Centre
Amazon Pinpoint will host an informative session about our Customer Engagement solutions at the AWS Singapore Summit. In this session, we will describe how AWS enables companies to better understand and engage their customers with personalized, timely, and relevant communications on multiple channels. You will also learn how Disney Streaming Services is using Amazon Pinpoint to engage their users.
Register for the Summit here.

“Mobile Days” at the AWS San Francisco Loft   

April 24, 2019 
AWS San Francisco Loft
Join us for an engaging day of discussion and education. Amazon Pinpoint experts will host the following sessions:

  • 2:30pm – 3:30pm: How Do You Measure Customer Success? Featuring Amazon Pinpoint. 
  • 3:30pm – 4:30pm: Using ML to Enhance Your Marketing. Featuring Amazon Pinpoint and Amazon Personalize.

Space for this event is limited, so please reserve your seat here.

AWS Summit, Sydney

May 1-2, 2019
International Convention Centre (ICC), Darling Harbour, Sydney
Don’t miss the customer engagement session on April 30th. This session, part of Amazon’s Innovation Day event, features a keynote address by Neil Lindsay, Vice President of Global Marketing at Amazon. The session explores how AWS technologies power organizations that deliver customer-centric innovations. Learn about how Australia’s largest brands and digital agencies use AWS technologies to engage customers, build new business models, and transform customer experiences.
Register for the Summit here.

AWS Summit, Mumbai

May 15, 2019
Bombay Exhibition Center, Mumbai
The Amazon Pinpoint team will be at the “Ask an Expert” booth. Stop by to meet the team, ask questions, and pick up Amazon Pinpoint swag!
Register for the Summit here.
