Getting to know Laravel: the framework that revolutionized PHP


If you’re still here after reading the title, it means you have not lost hope of finding a good PHP framework.

The “framework vs. no framework” debate is a long one, and I think the detractors’ strongest arguments are based on the options available today. But at some point we feel the need not to reinvent the wheel, especially when we have to implement common features (friendly URLs, session management, database management, etc.).

That’s why we want to talk about Laravel, a framework that has been gaining a lot of traction in recent months and that, with the release of version 4, has positioned itself as a very interesting option to explore.

In this post we review its strengths; in subsequent posts I’ll show some practical examples of how to use it and the benefits it offers.

The basics

The framework comes with everything you would expect: router, models, layouts, views, controllers, etc. For templates it uses its own engine, called Blade, which provides some handy helpers.
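As a rough illustration (the layout, section and variable names here are just placeholders, not part of any real project), a Blade view looks something like this:

@extends('layouts.master')

@section('content')
    <h1>Hello, {{ $name }}</h1>

    @foreach ($posts as $post)
        <p>{{ $post->title }}</p>
    @endforeach
@stop

The @extends and @section directives handle layout inheritance, and {{ }} echoes a value into the page.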

Artisan

A command-line client that allows you to execute framework-specific commands. It is versatile, powerful and can even be extended, allowing us to create our own tasks so that they are available at any time from this client.
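As a rough sketch of what such a task can look like in Laravel 4 (the class name, command name and message are invented for illustration), a custom command is just a class that extends the framework's Command:

<?php

use Illuminate\Console\Command;

class CleanupCommand extends Command {

    // The name used on the console: php artisan app:cleanup
    protected $name = 'app:cleanup';

    protected $description = 'Remove stale temporary records.';

    // Laravel 4 calls fire() when the command runs
    public function fire()
    {
        $this->info('Cleanup finished.');
    }

}

Registering it (for example with Artisan::add(new CleanupCommand); in app/start/artisan.php) makes it available like any built-in command.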

Composer

Since version 4, Laravel is available directly through Composer, the new package and dependency manager for PHP. This lets us add and remove whatever packages we want, and even create our own packages, declare them in composer.json and pull them into our application with a composer update. Composer is so useful that Laravel itself relies on packages from other frameworks, such as Symfony (Artisan is built on top of its console component), among others.

Migrations

By now we should all be using a version control system for our code (and if we are not, it’s time). What Laravel adds is the ability to keep track of versions of our database schema as well. This, combined with a seeding system, allows us to get our application downloaded and running with a few commands. Example:

$ git clone https://github.com/laravel/laravel.git
$ composer update

And to configure the database:

$ php artisan migrate
$ php artisan db:seed

Ready! We have our application downloaded, installed, and with its database configured.
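To give an idea of what one of those migrations looks like, here is a minimal sketch (the posts table and its columns are just an invented example):

<?php

use Illuminate\Database\Migrations\Migration;

class CreatePostsTable extends Migration {

    // Executed by "php artisan migrate"
    public function up()
    {
        Schema::create('posts', function($table)
        {
            $table->increments('id');
            $table->string('title');
            $table->text('body');
            $table->timestamps();
        });
    }

    // Executed when the migration is rolled back
    public function down()
    {
        Schema::drop('posts');
    }

}

Seed classes work the same way: a php artisan db:seed run calls their run() method to fill the tables with initial data.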

Resources

This latest version introduces the concept of resources, which lets us write our controllers as if they were a REST API and consume them in the same way. That is, to add an element we do a POST /resource, to get the list a GET /resource, to get a single item a GET /resource/{id}, and so on. This allows us to expose our application and consume it as an API without changing anything, making it easy to integrate with other systems.
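In practice, registering a resource is a single line in app/routes.php; a minimal sketch (the controller name here is just illustrative):

// app/routes.php
Route::resource('users', 'UserController');

// UserController then answers with index(), show($id), store(),
// update($id) and destroy($id) for the corresponding HTTP verbs.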

Eloquent ORM

Object-Relational Mapping for our database. It allows us to interact with our database as if each table were a model, faithfully respecting the MVC division. It is very easy to use and, above all, it uses a very expressive syntax that is easy to read and understand.
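A small sketch of how that reads in practice (the User model and its columns are assumptions for the example):

// The model maps to the "users" table by convention
class User extends Eloquent {}

// Creating a record
$user = new User;
$user->username = 'jane';
$user->save();

// Querying with a readable, chainable syntax
$active = User::where('active', 1)->orderBy('username')->get();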

Generators

One of the things I like most is the time they save. They come from a package created by Jeffrey Way that can be added to composer.json so it is downloaded automatically at the start of our app’s development. All we need to do is add it to the require section:

  "Require": {
     "Laravel / framework": "4.0 *."
     "Way / generators": "dev-master"
 } 

After a composer update we’re ready to use it. This will allow us to generate a template of any of the items we need, whether it be a Model, Controller, Seed, View, Migration, etc.

A practical example to show how convenient it is: let’s create a resource to manage the users of our application:

$ php artisan generate:resource user --fields="name:string, username:string, password:string"

Running the command creates the following:

  • Views for the list, detail, edit, and create pages
  • General layout (created if it is the first time you run the generator)
  • The resource model (User)
  • The resource controller with all its methods
  • A migration to create the table with all its fields
  • An updated routes.php file (which contains the mapping of all routes) with the new resource added
  • A class for unit testing

In short, once it finishes, you can access http://localhost/usuarios and you will have a fully functional CRUD ready to extend and/or modify however you want. This is an excellent way to shorten the initial development time, especially when we need to test an implementation quickly in order to make better decisions.

And much more

Many interesting things were left out, such as the built-in driver for Redis storage, support for unit testing, validation, etc. This is just a sampling of the features. You can see the rest of the magic in the official documentation.

In the coming weeks we will develop our first application using Laravel.

Advantages and Disadvantages of CloudLinux

CloudLinux is an extremely powerful operating system built on top of the open source operating system CentOS. We’re proud to offer it – and it works tremendously well on our infrastructure. Just recently our company switched its shared hosting servers over to CloudLinux and has seen loads drop dramatically. Does that mean CloudLinux is right for you?

Do I have to run CloudLinux with Whitesystem? Does CloudLinux make me a cloud provider?

Despite its name, CloudLinux is not required to run on cloud hosting infrastructure, nor is it required to be used on our network. We offer multiple operating systems, including CentOS, FreeBSD, Ubuntu, Windows and many more. CloudLinux can also be used on dedicated servers and traditional VPS servers without any problem. Likewise, using CloudLinux does not make someone a cloud hosting provider. Cloud hosting is determined by the underlying infrastructure; typically it requires multiple servers working together to provide resources on demand, although a formal definition has never really been settled.

Is CloudLinux right for me?

How you use your cloud VPS from us determines whether you really need CloudLinux. While it’s an extremely stable operating system with numerous security enhancements, it may not be something you need to run. CloudLinux really shows its value when you’re hosting multiple websites on the same server. It does this by placing each user inside what it calls an LVE (Lightweight Virtual Environment). The LVE limits the amount of resources available to each user, so that one single user cannot bring down the entire server. The resources limited include CPU, RAM, and the total number of processes run by the user, such as PHP scripts. The amount of resources each user gets is completely definable by you, so CloudLinux actually makes it possible to offer structured shared hosting plans based on CPU/RAM usage instead of the usual disk space and bandwidth. When a user reaches the defined resource limits, a 503 error is displayed to the website’s visitors, preventing the user from consuming additional resources.

Conclusion – We have received several tickets from customers asking why their sites load so slowly or show Apache errors when running certain applications or accessing their websites. When we study these cases, practically 99% of them are high-demand sites: more than 10K daily page views, busy forums, and communities with many members connected at once. If this is happening to you, do not hesitate to contact us; we can adjust the LVE limits to dramatically improve the performance and speed of your site, but always remember that you are on a shared service.

Coming up next: a post about solidarity!

Version Control: from SVN to GIT

What is version control?

It is software that allows you to manage different versions of the application you are working on. When we speak of versions we mean any change made to any of the files.

The benefits of using these tools are many. For example, if we are working on an application with a moderately large folder hierarchy (any PHP framework falls into this category), we no longer have to remember which files were modified in each folder, and the tool also takes care of updating those files on the server. On the other hand, if we work with others on the same code, it is responsible for unifying each developer’s local version and resolving the conflicts that arise when different people modify the same file.

SVN

SVN is probably most people’s first contact with version control. It is a version control system that maintains a central repository with which all the environments we use synchronize: the local copies of each developer on the project, the production environment (which is also a client of the central repository), and the development and/or testing environments if they are used.

To begin working with a repository, we do a checkout, which downloads the repository’s contents. Once we have our local copy, we can work as usual, keeping in mind that if we delete a file it is important to do it through SVN; otherwise, when we try to upload our changes, SVN will not find those files.

If we need to add a file, we do an add to put it in the repository, a delete to remove it, or we can configure SVN to ignore certain files or directories (very useful for configuration files, cache folders, etc.).

All these commands are used when we manage our repository from a console, but we can also use a client with a graphical interface such as TortoiseSVN.

Finally, when we finish all our changes and modifications, we perform a commit to publish them on the server.
Each time we return to a project, we must run an update to download all the changes that have been made by other users.
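Putting it all together, a typical SVN session looks roughly like this (the repository URL and file names are placeholders):

$ svn checkout http://example.com/svn/project
$ svn add newfile.php
$ svn delete oldfile.php
$ svn commit -m "Describe the change"
$ svn update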

GIT

This version control system has one big difference from SVN: it works in a distributed manner. There is no single server with mere working copies; instead, each copy is a complete version of the repository.

Its creation is owed to the great Linus Torvalds, who developed the project to manage the Linux kernel. This is important because it speaks to the size of the projects it can handle; besides being very fast, it has a powerful yet easy-to-use system for managing different development branches of the same project.

A final difference is the addition of an intermediate state (the staging area) between modified and committed, which at first may make it look more complicated to start using.
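To make that intermediate state concrete, the basic Git cycle looks roughly like this (the repository URL is a placeholder):

$ git clone https://example.com/project.git
$ git add file.php
$ git commit -m "Describe the change"
$ git push origin master

git add moves a change into the staging area; only what has been staged is included in the next commit, and git push publishes the commits to a remote copy if we use one.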

In the next few posts we will explain in more depth how to use SVN and Git. How do you work? Do you use any of these tools? Tell us!

Root Reselling – Common Questions

We are introducing a new service to our product line: a system that, in a totally dedicated and managed environment, gives the user all the benefits of a Reseller account (with many special features) while also granting root (administrator) access to the WHM panel. This lets you stay ahead in your business and manage all the applications on your server however you want, so you can provide the best service on the network.

Read on for more!

This product combines two existing products, a Reseller account plus a VPS, which results in a space just for you (with a panel to restart your own server, manage its power state, data transfer, etc.), but with all the support of Whitesystem as a fully managed service and free licenses for software such as cPanel, Softaculous, ClientExec and many more.

What is the difference between a Root Reseller and a VPS?

The main difference between a VPS and a Root Reseller is the amount of resources and features (of all types) you have at your disposal: you get the full potential of the machine where your account is installed (the full potential of the CPU), low-cost or even free network features such as SSL certificates or dedicated IPs, and we handle the management for you (everything related to system upgrades, absolutely everything, we do it for you).

Why hire a Root Reseller from you?

Okay, first, nobody else offers Root Resellers; it is an original Whitesystem product that does not use any third-party scripts, and the root access it gives you is the ultimate level of reselling. In addition, for the price of a reseller account you get free unlimited SSL certificates from GlobalSign, dedicated IPs for just 1.00 USD, a free ClientExec account and reseller status (ClientExec licenses at only 2.00 USD/m give you the chance to resell them), an eNom account, a domain reseller account at 7.50 USD for the most popular TLDs, robust servers (24 GB of RAM, dual Intel quad-core CPUs), and much more.

Well, and what about the prices?

Prices are still being finalized because of all the features we offer, although the default account will likely be priced at 30.00 USD/m (only slightly higher than HostGator or iWeb reseller accounts).

Where are these servers located?

The servers are located in Chicago, IL, in a SingleHop DC.

I hope you like it. Soon we will publish a comparative table with all the special features we offer, comparing prices and features with other companies, so you can see how this revolutionary product will benefit you and your customers.

Reducing CPU usage for WordPress users!

WordPress is one of the most demanding content management systems around these days. Most users now use WordPress for their blogs or websites; around 85% of the sites on our servers are running WordPress, and many clients run multiple WordPress blogs for their business. WordPress tends to use a fair amount of CPU and memory, and today’s shared hosting environments are limited more by CPU and memory than by space and bandwidth. It is always wise to spend a little time reducing overall CPU usage: it makes the blog run faster, and hosting companies are happy to host sites that are nicer to their CPUs. Here are some tips to reduce the CPU usage of a WordPress blog and improve site performance.

 

One of the first plugins I suggest all WordPress users install is “wp-super-cache”. You can download the plugin here:

http://wordpress.org/extend/plugins/wp-super-cache/

It is pretty easy to install, but documentation can always be found on the WordPress site:

http://wordpress.org/extend/plugins/wp-super-cache/installation/

wp-super-cache is the fastest caching plugin for WordPress blogs. It is always better to serve pages from cache instead of running SELECT queries for every visitor to your blog. Enabling Super Cache can potentially reduce CPU usage by around 60-75%. One thing you should make sure of is that you are not running multiple caching plugins. I have seen a couple of users who thought using multiple caching plugins would give better results, but mixing caching algorithms is a bad idea for your blog and can result in a real mess.

If you are running scheduled posts on your blog, then it is probably a better idea to run wp-cron.php using cron jobs. WordPress calls wp-cron.php each time a visitor comes to your blog, which is a fairly wasteful approach. I am not sure why WordPress does this, but calling it once every 2 hours seems enough. You can set up the cron jobs from cPanel. To run the cron job every two hours, you would set the timing to something like the following:

0 */2 * * *

This runs at the very first minute of each even hour of the day. In the command section use something similar to:

php -q /home/cpanelusername/public_html/wp-cron.php

Replace cpanelusername with your actual cPanel username. If you added the blog as an addon domain, wp-cron.php is probably not in public_html but in a subfolder, so you would need to change the path accordingly, something similar to the following:

/home/cpanelusername/public_html/addondomain.com/wp-cron.php

A very well written article about the high CPU usage of wp-cron.php can be found here for reference:

http://trinity777.wordpress.com/2008/10/28/wordpress-26-the-issue-of-wp-cronphp/

Two more plugins that are frequently used by clients and can cause excessive CPU usage are “All in One SEO Pack” and featured gallery plugins like NextGEN. If you have no option other than using a gallery, then you probably have to stick with it, but I strongly suggest not using All in One SEO Pack; enabling the individual features you need one by one is better than using this all-in-one plugin. A very well written article on WordPress SEO can be found here, and I suggest you read it before blindly installing All in One SEO Pack:

http://yoast.com/articles/wordpress-seo/

A good percentage of users run autoblogs, which are pretty popular with WordPress these days. Autoblogs tend to use a lot of CPU during their cron executions. There isn’t much you can do to avoid those occasional CPU spikes, but it is a better idea to set the cron jobs at odd times. For example, setting the cron to run at 17 minutes past each hour may improve performance compared to running it at the very first minute of the hour. Most users tend to run their crons at the very first minute, which sometimes causes load issues when lots of crons try to run at the same time, so using odd timing is a decent idea for both parties. You should also find the best update interval for your autoblog: a reasonable gap of 2-4 hours is always better, as it reduces the frequency of your cron job. But if you have no option other than running it every hour, then don’t overthink it, just run it every hour.
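For example, keeping the same path used earlier, an “odd timing” entry that runs every three hours at minute 17 could look like this:

17 */3 * * * php -q /home/cpanelusername/public_html/wp-cron.php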

For any sort of additional help, please post a comment. We provide absolutely free consultation on reducing high CPU usage. Moreover, we would be happy to install all the modules and reconfigure your blog to make sure it uses less CPU and loads faster, so never hesitate to contact us for help.

How to stop wp-cron.php from firing!

By Dave from Mellow host

Recently I wrote an article about reducing the CPU usage of a WordPress blog. That post contains some information about wp-cron.php, but it doesn’t explain how to stop wp-cron.php from eating so much CPU. A couple of our Twitter followers and Facebook clients asked for a post describing how to stop wp-cron.php from firing or taking high CPU. Here are some small tricks to reduce the CPU usage of wp-cron.php.

First of all, if you have root access to your server, you can block wp-cron.php using mod_security; this prevents wp-cron.php from being called through the web server, but you can still call it using cron jobs. How to set up manual cron jobs for wp-cron.php is described in my earlier post, “Reducing CPU usage for WordPress users”.

If you are on shared hosting or want to permanently stop this culprit, you need to stop this PHP file from being spawned. The spawning is handled by the cron.php file located under your WordPress root/wp-includes/ folder. Open that file in your file manager or FTP client and find the line stating:

spawn_cron( $local_time );

Now comment out this line and wp-cron.php will stop spawning every time a user visits your site. You can comment it out with two slashes as follows:

// spawn_cron( $local_time );

Keep in mind that this stops all sorts of scheduled events as well.

I also discovered another alternative that stops wp-cron.php from running via HTTP requests but still works fine with cron jobs. Open cron.php and find the following line:

if ( strpos($_SERVER['REQUEST_URI'], '/wp-cron.php') !== false || ( defined('DISABLE_WP_CRON') && DISABLE_WP_CRON ) )

Now, replace this line of code with the following:

if ( strpos($_SERVER['REQUEST_URI'], '/wp-cron.php') === false || ( defined('DISABLE_WP_CRON') && DISABLE_WP_CRON ) )

Make sure the “return” statement on the very next line stays immediately after it.

You can also put the following in your wp-config.php file to set the DISABLE_WP_CRON constant to TRUE:

define('DISABLE_WP_CRON', true);

Finally, you should make sure wp-cron.php runs via cron jobs for your scheduled events; if you don’t have scheduled events, then there is no need to add it to cron at all. But if you do, have a look at my previous post on setting up manual cron jobs for wp-cron.php using cPanel.

Any of the above solutions should prevent wp-cron.php from firing and eating up CPU. Happy blogging!

Confusing server load average explained!

From mellow host

Server load average is a big phrase in the web hosting industry. Customers trust servers with the lowest CPU load; moreover, I have seen that they feel very secure when they are on a server averaging a load of less than 1. I am very familiar with the question new customers ask on our live chat desk: “What is your average CPU load?” Now let me go deeper into this discussion and see if I can find something new for you.

There are many metrics a modern operating system provides to measure current system performance, and CPU load average is one of them. It is stored under the proc file system and is readable from user space.
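As a quick sketch, you can read those numbers yourself; on a Linux host, PHP’s sys_getloadavg() returns the same three values the kernel exposes in /proc/loadavg:

<?php

// The 1, 5 and 15 minute load averages, as floats
list($one, $five, $fifteen) = sys_getloadavg();

echo "Load average: $one (1 min), $five (5 min), $fifteen (15 min)\n";

// The raw values straight from the kernel
echo file_get_contents('/proc/loadavg');

?>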

Now, let’s look at what this metric actually means. I have found a couple of articles explaining the definition, and they seem good enough. One of the most reputable members (and a moderator) of WebHostingTalk has an article explaining it on his blog:

http://whreviews.com/server-load.htm

Server load – just a number?

“Well, yes, basically the server load is a number. This number is usually under the x.xx format and can have values starting from 0.00. It expresses how many processes are waiting in the queue to access the processor(s). Of course, this is calculated for a certain period of time and of course, the smaller the number, the better. A high number is often associated with a decrease in the performance of the server.”

So it seems to clearly state that CPU load tells you the number of processes your server’s processor is going to execute. Or does it? Let me tell you something: that is a very wrong definition of CPU load. Let me show you something from the manual of the “uptime” Unix command:

“uptime gives a one line display of the following information. The current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.”

That means the average you see is not showing you the processes that are currently waiting, but the processes that waited over the past 1, 5 and 15 minutes. (Solaris includes some running processes, but it cannot in any way predict the processes that will wait in the next 1-minute queue.) So, if the CPU load is 4 and you have 4 CPUs, does that mean 4 processes were waiting for CPU access in the last 60 seconds? Does that seem like a lot for current RAID-controlled hard drives and servers with 4/8 GB of RAM? Just to note, the Linux kernel treats each thread as a process. It is possible to improve performance a lot using threads, which is why most people use thread-based models these days; so as far as Linux is concerned, those 4 waiting processes could just as well be 4 threads, and in some cases threads can be served faster than processes.

We are done with the definition; now let’s get into a deeper analysis of CPU load and CPU performance. I have seen people claim that a CPU is at 100% or 200% usage after seeing the past load average cross the number of CPUs. That is a completely wrong way to measure the performance of a CPU: you can never use 200% of your CPU, and if that were possible I doubt we would ever have seen multi-processor or multi-core CPUs. This metric was simply not made to measure CPU performance.

CPU performance depends on the time the CPU remains idle: more idle CPU time means a more stable CPU. The question then becomes, how can I measure idle CPU time? A system admin can do this using the “sysstat” package, which most Linux distributions have available, or check the idle CPU with the top command or sar. How does the system count idle CPU time? It takes the total time and excludes the time spent on user processes, system software or services, and IO wait.

Does idle CPU time have any impact on server load? It may or may not, but one thing I can assure you of: the more your server load fluctuates, the more your idle CPU is being exhausted. So you can watch the fluctuation of the CPU load to understand how your system is performing. Here is what I believe about server/CPU load: the more stable your server/CPU load is, the more stable the server you are on. Your sites should load pretty fast, and if they don’t, it’s time to contact your hosting support. I have seen many good hosting companies share mpstat/sar output with their clients, showing the real CPU usage for around 30 minutes or so, to put them at ease.

Most system admins these days tweak the server to make sure it keeps more idle CPU time. I have seen a CPU load of 10 on a 4-CPU system with 60% idle CPU while running a backup/log process at low priority: the lower priority gives the backup/log process smaller time slices, which results in more processes waiting, simple math. When the idle CPU time frequently drops to 0-5%, the server becomes sluggish. It is simply not right to judge CPU usage from the load average of a Linux system, and above all, misreading this metric as “200% CPU usage” is what I don’t feel right about!

Happy reading, and never feel guilty about asking your host for an explanation; you put your most important data on their hard drives, so you should be able to trust them! :)

By Dave from mellow host

Fixing YouTube problem with Opera 10 (Old Flash? Go Upgrade!)

Well, as Firefox says, “Well, this is embarrassing” also happens to Opera 10 with Flash. It is not known whether it is a server-side problem related to YouTube or some kind of bug in Opera 10.

Obviously both the browser and Flash are on their latest versions, so it is no doubt due to a YouTube failure, since only a few hours ago we could watch videos without problems.

A quick search on Twitter (1, 2, 3, 4, etc.) confirms the suspicion, so after searching the internet I found a userscript that seems to fix it.
Here you can see proof that this is clearly a YouTube programming problem.

To install it, we download it from this site.

In Opera, we go to the YouTube site, right-click it and select “Edit site preferences”, go to the Scripting tab, click “Choose” and point it to the folder where we saved the userscript.

Fixed!

Another XBOX Snippet

This time the snippet shows when you were “last seen” and what you were doing/playing. It grabs the data from a script created by duncanmackenzie.net. The only downside is that the data isn’t updated very often and on rare occasions nothing is returned, but I believe this is because of too many requests within a set time period. Remember to insert your own gamertag instead of rentedsmile.

<?php

// Fetch the gamertag data from the duncanmackenzie.net API
$ch = curl_init('http://xboxapi.duncanmackenzie.net/gamertag.ashx?GamerTag=rentedsmile');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 4);
$data = curl_exec($ch);
curl_close($ch);

// Parse the returned XML
$xml = new SimpleXmlElement($data, LIBXML_NOCDATA);

// Print the "last seen" status and what was being played
foreach ($xml->PresenceInfo as $mystatus) {
    echo '<div id="xboxlivestatus">' . $mystatus->StatusText . ' : ' . $mystatus->Info . '</div>';
}

?>

This will be the output:

Offline : Last seen 12/07/09 playing Left 4 Dead 2