Posted by: jimmydlg | February 17, 2009

CSS Positioned Elements

Well, I’m not sure exactly what to write on this week, because I’m familiar with the use of positioned elements, although I haven’t often used them in my work. I primarily stuck to table layouts because for a long time that’s what I had to work with, and it worked fine. But over the years, as my designs evolved, the complexity required by table layouts grew considerably and led to templates that were difficult to understand and maintain.

Most of the work I do uses some sort of templating system, where I design the page once and then design the content that goes into particular portions of the page. This is through ASP.NET, and what I’m referring to are user controls, or modules within content management systems. So complexity wasn’t too much of a problem, because I only deal with the layout of the page when I want to change the layout, and not with elements within the page, which are abstracted away into other files.

But on occasion, when I’d need to change something, I’d invariably be stuck deciphering year-old code and trying to remember just why I embedded a table three levels deep, spanned several rows, and added placeholder images to keep the cells visible.

From the bit of CSS positioned development I’ve done recently, I can see this is definitely going to be easier to maintain.
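
To make this concrete, here’s a minimal sketch of the kind of positioned layout I’ve been experimenting with (the element names and sizes are made up for the example, not from a real project):

<div id="container">
    <div id="sidebar">…</div>
    <div id="main">…</div>
</div>

#container {
    position: relative;   /* establishes the positioning context */
    width: 760px;
}
#sidebar {
    position: absolute;   /* placed against #container, not the page */
    top: 0;
    left: 0;
    width: 180px;
}
#main {
    margin-left: 200px;   /* leaves room for the absolutely positioned sidebar */
}

Because #sidebar is taken out of the normal flow, #main just flows as one ordinary block; there’s no table-cell bookkeeping to untangle later.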

http://www.yourhtmlsource.com/stylesheets/csslayout.html

http://webdesign.about.com/od/advancedcss/a/aa061307.htm

http://www.elated.com/articles/css-positioning/

Posted by: jimmydlg | February 9, 2009

I didn’t find a whole lot of useful roommate websites, which is a very encouraging result. Below I’ve listed the sites I did find that have features I may want to consider using or emulating myself.

http://www.roommates.com/
This site has a very compact look and feel. Like most other roommate sites, it has a QuickView or QuickSearch control in the left panel. It does seem a little busy, though, and I can’t imagine the random “Roommate Pick” on the front page would ever be of use.

Strongest Feature: The messaging application built into the site has the look and feel of an email program. That makes it seem intuitive to use and familiar.

http://www.roommateclick.com/
This site was pretty slow to load, but it has an interesting home page. There’s a quick search bar on the front page, with some extra controls for creating profiles, which seems nice. This site is very minimalistic, and the profile displays are very concise.

Strongest Feature: The search results are very compact; they quickly show the user how many results were found and let them easily browse through the results. There’s also a “send interest” option which informs the other person you’re interested, but doesn’t require you to type messages or anything.

http://www.easyroommate.com/index.aspx
The front page seems clean and consistently styled. This site has pictures of the newest rooms and roommates, but again, I’m not sure region-specific items are all that useful on the home page of a site that may have 30 or 40 regions. It’s pretty, and it takes up space, but it doesn’t do much more than that. The new roommate and room counter is nice; it shows the site is busy, and a busy site suggests more traffic if you’re in the process of looking.

Strongest Feature: The results page shows you matches based on the criteria you set when you created your account, but it also allows you to modify those slightly in case you don’t have enough matches.

http://www.roomster.com/
Again, pictures and profiles on the home page; not so interesting. The navigation bar on this site is very minimalistic; I like that and intend to use something similar. There’s also an “Answer my questions” section where people can post their own custom questions, which I kind of like. The profile seems a little busy, but some of it looks auto-generated, which could be interesting to use. It would give the profile more body and make it more interesting to read.

Strongest Feature: Popular cities are listed right on the home page. If I’m browsing from one of them, I think I’d be much more likely as a user to click on it and start viewing the site’s offerings, rather than having to fill out a search box.

http://www.roommatenation.com/
Well, this is an example of a site I wouldn’t want to borrow any features from. Although, I do like that you can get right to the profiles before signing up, and then sign up once you’ve found something interesting. This site is a perfect example of why my idea should exist: there is so much “noise” on this site, and it looks like some of the profiles may be really old. I’ll want to be sure to only show active profiles, or profiles that have been used recently.

Strongest Feature: Well.. it serves as a good example of what NOT to do I think.

Posted by: jimmydlg | February 1, 2009

CSS Float Layouts

Well, I have no shame in admitting that up to this point I primarily used tables as my layout device. They were easy, and it seems I’ve been using them forever. I’m quite enjoying using float layouts, however. It takes slightly more work at first, since I have to remember the widths and sizes of all the elements and add them up as I go along (lest they should wrap on me 🙂), but that’s not so bad.

It is quite nice from a code perspective to be able to fill in server content tags with minimal surrounding HTML. I mean, honestly:

<div id="navigation">            
    <uc1:Navigation ID="Navigation1" runat="server" />
</div>
<div id="content">
    <asp:ContentPlaceHolder ID="contentMain" runat="server" />
</div>

Is SO much nicer than:

<table>
    <tr>
        <td class="navigation">
                <uc1:Navigation ID="Navigation1" runat="server" />
        </td>
        <td class="content">
                <asp:ContentPlaceHolder ID="contentMain" runat="server" />
        </td>
    </tr>
</table> 

And this is the simplest of cases. The float-based method also makes more sense when reading the code back. Looking at some of my more complex recent projects where floated layouts have been my design method, I’ve found the code easier to maintain, and I’m not nearly as reluctant to go back and make fixes (for fear of having to recall what I was trying to accomplish the first time).
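
For completeness, the div version above only works because of the stylesheet behind it. Here’s a rough sketch of the float rules that would drive it (the widths are my own assumptions for the example, not from an actual project):

#navigation {
    float: left;
    width: 200px;    /* fixed-width column on the left */
}
#content {
    float: left;
    width: 560px;    /* 200 + 560 = 760, so both columns fit side by side */
}

This is exactly the adding-up-widths exercise I mentioned: if the two widths (plus any margins, borders, and padding) exceed the container, the second column wraps below the first.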

This has probably been one of the more useful transitions in programming practice I’ve gone through in a while, and I look forward to learning more useful practices.

http://webdesign.about.com/od/advancedcss/a/aa010107.htm

http://www.smashingmagazine.com/2007/05/01/css-float-theory-things-you-should-know/

http://websitetips.com/css/tutorials/layouts/#floats

Posted by: jimmydlg | January 25, 2009

Meaningful Markup

I’ve been familiar with the concept of creating pages with meaningful markup in mind, even though in recent projects I haven’t put those concepts into practice much. It’s not that I disagree with the concept; it’s just never been a point of concentration in the work that I’ve been tasked to complete.

I do, however, forget (as Denis Defreyne points out in his article about meaningful markup) that not everyone (and everything) can make use of all my carefully crafted, styled, and designed sites. Search engines and users without a graphical or even full desktop browser could benefit from a site that concentrates on separating content from delivery by avoiding certain legacy tags and keeping the content of the page in mind.

Robert Nyman suggests removing all the CSS, scripting, and possibly even images when examining the content of your page, to see if the content is well organized and connected. Even though I had never considered this before, I can see how it could be useful. My typical approach to reviewing a finished website is to take a step back from it and see how the page flows graphically. This technique could certainly help me evaluate the content as well, so that the page is both presentationally pleasing and semantically appropriate.

While one of the tenets of meaningful and semantic markup is to identify content with meaningful class names (when using CSS), I don’t completely agree that maintaining some basic presentation references isn’t warranted in some cases. For example, I commonly define normal text with the class “normal”, and this is probably a habit I won’t be breaking for a while. Designers can of course get carried away naming classes after formatting, but to me, saying that text should appear “normal” is both presentational and semantic. While I may discover a better way to identify normal content on a page than defining it as “normal”, what I can’t see doing is blanketly assigning default font style and size definitions to as many tags on a page as I can think of. I actually enjoy being explicit about how I want certain content to appear, but who knows, maybe that will change in time.
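
As a quick, made-up illustration of the naming difference being debated:

<!-- presentational: describes how the text looks -->
<p class="normal">Rooms available downtown.</p>

<!-- semantic: describes what the text is -->
<p class="listing-summary">Rooms available downtown.</p>

The second survives a redesign unchanged; the first only makes sense as long as “normal” still looks normal.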

Posted by: jimmydlg | January 18, 2009

Business Roommates

As my education progresses, I find myself tasked with creating a useful website. Almost immediately I knew what I wanted to do, which was, as the title of this entry suggests, to create a website that targets business professionals who may need to room with someone while they travel for their careers. I’ve actually had roommates before, and the most successful roommate situations I’ve been in have been with other business professionals.

There seems to be a definite lack of support for individuals seeking rooms in this very particular market segment. Most sites I’ve encountered either diminish the importance of matching like-minded individuals, or inundate the user with so much information that searching for potential roommates, or posting an ad to be found by others, is prohibitively time-consuming and ultimately results in hardly any matches at all.

In addition, I’ve seen many lackluster sites (in both appearance and functionality) charging high prices for the chance to experience these deficits. I think the market as it stands could benefit from a site that’s professional in its appearance, its functionality, and its audience. The site I intend to develop will cater to those business professionals in order to help eliminate wasted time and poorly matched roommate candidates. It will also, of course, be developed as XHTML 1.1 and be fully XHTML compliant.

Some of the resources I plan to use in this development effort are listed below:

http://validator.w3.org/

http://www.w3schools.com/CSS/CSS_reference.asp

http://www.istockphoto.com/index.php

Posted by: jimmydlg | December 15, 2008

APIs and Mashups

An Application Programming Interface is a general way of describing a set of functions or methods that allow developers to interface with the application, service, or technology in question. In the programming world, APIs and their documentation are entrenched in day-to-day life and are a part of every programmer’s workflow.

APIs can take an immeasurable number of forms, but common to all of them is the idea that they are designed to make a developed technology accessible to others. They allow for a consistent way of interfacing with other developers’ work, and they are usually very well documented with examples and technical manuals. (This is because if someone goes through the trouble of developing an API, it’s most likely because they want someone else to have access to it, and as such, they usually explain how that access works.)

So what does it mean to consume an API?

Typically, when a developer is working with an API, they’re working with a set of tools or a programming language that the API is compatible with. As I mentioned before, the number of combinations of APIs and consumers of APIs is immeasurable, because not only are there many APIs already out there, but developers are making new ones every day.

For this example, I’ll go with something I’m familiar with and explain the consumption of one of the Microsoft.NET API calls I use frequently.

Microsoft.NET is a platform that at its core has a component called the Common Language Runtime (CLR). This allows anyone to write their own language by bridging that language’s syntax to Microsoft Intermediate Language (MSIL), so that it can run on the CLR. For this example, I’ll be using the language Visual Basic.NET.

Programming languages are made up of various components: keywords, statements, functions, API calls, declarations, variables, and many others. In the following code example, I’ll separate the language from the API to clearly show what I mean by Application Programming Interface.

In Microsoft’s development of the .NET platform, they did a lot of lower-level groundwork for developers so that they didn’t have to reinvent the wheel to accomplish common tasks. This is really the benefit of an API: they spent the time and research developing the internal specifics so that we can simply arrange those components in whatever way we find useful.

Take, for example, the need to determine the name of the computer an application is running on. To find out that name, my program would need to leave its own bounds and venture into the operating system’s information. Microsoft has given .NET developers a simple way to do this in their programs by means of an API that exposes the machine’s name on the current operating system.

In order to access the computer’s name within the .NET Framework, I would simply use the following code:

Dim strComputerName As String = My.Computer.Name

“My.Computer.Name” is the API Microsoft has made available for developers to interface with the operating system and determine the name given to the computer. It is well documented (as are all their APIs), and the documentation contains everything a knowledgeable developer needs to use it.

Everything on your computer screen uses an API. At the most basic level, programs interface with the operating system just to run, through the OS’s API. In fact, even the OS interfaces with a sort of hardware API, through the hardware abstraction layer, so it can run. And these days, Web 2.0 sites are extending the idea of APIs out of our computers and across the internet by making entire websites available through the same kind of well-documented API.

There are two very common API technologies used on the internet that I’ll be working with in this discussion: RSS feeds and XML Web Services. Both technologies are based on XML, which became important as the number of differing operating systems and information sources grew rapidly and developers needed an agreed-upon, standard way of communicating with each other.

We’ve all seen examples of RSS feeds, but here is an example of a Web Service. I created this sample service to add two numbers together and return the result in XML. The resulting function, AddNumbers, can be consumed by anyone on the internet, and my application’s logic (simple as it may be in this case) can easily be integrated with the development efforts of others.
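
The original post linked to the live service, so as a stand-in, here’s a rough sketch of what an ASMX service like the one described looks like in Visual Basic.NET (the class name and namespace URL are placeholders):

Imports System.Web.Services

<WebService(Namespace:="http://tempuri.org/")> _
Public Class MathService
    Inherits WebService

    ' ASP.NET exposes this method over HTTP and wraps the result in XML
    <WebMethod()> _
    Public Function AddNumbers(ByVal a As Integer, ByVal b As Integer) As Integer
        Return a + b
    End Function
End Class

Calling AddNumbers with 2 and 3 returns a small XML document whose body boils down to <int>5</int>, which is what makes the result consumable from any platform.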

There are many services out there that offer to ease the development of integrated APIs. One such tool, Yahoo Pipes, allows users to merge many types of data sources and perform relatively complex operations on those sources to produce an output tailored to each individual application.

As an example of the usefulness and ease with which you can integrate these diverse services, I have created a pipe that uses my sample XML Web Service (the one that adds two numbers) to mash up information from an airline travel provider and give me a custom set of data. In this example scenario, I have a service that tells me the price it can get for an airline ticket given some input. Now, I don’t actually have a service that does this, but for my example purposes, imagine that my service which adds two numbers would do this for me. So I’ll provide it with the airfare cost from the deal RSS feed, and it’ll report back the price at which it can give it to me.

As you can see here in my pipe, I begin by selecting the feed:

Boston Deals Across America

I then filter out all non-sale items, since I’m only interested in items with pricing information. After that, I use a regular expression to extract just the pricing information from the title.
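
Pipes does that extraction with its regex module; the same idea sketched in Visual Basic.NET looks something like this (the sample title and the pattern are my own inventions for illustration):

Imports System.Text.RegularExpressions

Module PriceExtractor
    Sub Main()
        ' A deal title shaped like the ones in the feed (sample text, not real data)
        Dim title As String = "Boston to Atlanta on sale - $199"
        ' Capture the digits that follow a dollar sign
        Dim m As Match = Regex.Match(title, "\$(\d+)")
        If m.Success Then
            Console.WriteLine(m.Groups(1).Value)  ' prints 199
        End If
    End Sub
End Module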

Next, I build a request that will be sent to my pricing Web Service (in this case, the Web Service that adds two numbers), and then submit the results to it:

Add Numbers Web Service

The Web Service returns a price (which would be the price it could offer me the fare for, if this were real), and I then filter on that to find deals that are less than $250.

As you can see, I integrated two very distinct services that could be used for very different things to reach a very specific goal that helps me: finding flights to or from Atlanta with special pricing that ends up being less than $250.

So, with APIs available for just about anything and everything, we now have within our reach the possibility of integrating these applications with each other in the form of mashups, or other custom development efforts that tie two or more of these technologies together.

Some of the more useful ones I’ve found are Microsoft Virtual Earth integrating with Weather.com to make an interactive weather map that lets you zoom in to a very precise location and view the radar information for your street if you want. Weather Underground has done something similar with Google Maps, showing a real-time map of current reported temperatures and wind speeds.

Another very useful mashup I’ve used is the Texas Department of Transportation’s integration with Google Maps, which shows a near-real-time display of all reported traffic-related incidents in the DFW Metroplex.

From a design perspective, this is an extremely useful trend. In fact, in one of my recent contracts, I was tasked with displaying the locations of earthquakes I had run damage estimates on, by integrating with XML APIs provided by the University of Alaska for the Trans-Alaska Pipeline.

Instead of having to render the location of each reported earthquake through my own graphics APIs, I was able to use Google Maps’ URL-based API for displaying a map at a particular location and in a particular view. This drastically cut the development time for this specific area and functional requirement of the site, saving me the effort of recreating what Google had already done (and done better than I could have).
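
The appeal of a URL-based API is that the whole map view is described by the link itself. Here’s a sketch of how such a link might be assembled (the coordinates and zoom level are placeholders, not real event data):

' Build a link that opens Google Maps centered on a quake epicenter
Dim lat As Double = 61.21      ' placeholder latitude
Dim lng As Double = -149.9     ' placeholder longitude
Dim zoom As Integer = 7        ' a rough region-level view
Dim url As String = String.Format( _
    "http://maps.google.com/maps?q={0},{1}&z={2}", lat, lng, zoom)

No rendering code on my side at all; I just emit the URL and let Google draw the map.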

APIs are everywhere and essential. With their extension to the internet in the form of consumable RSS feeds and interactive XML web services, the possibilities for design and development are limitless. We can reuse complicated processes with relative ease and combine them in many new and exciting ways and spend more time concentrating on our site’s design, and less time reinventing the work of others.

Posted by: jimmydlg | December 6, 2008

XHTML and CSS

It’s almost harder to talk about a technology I use in my everyday life than it is to talk about a technology I’m just learning. I do make frequent “use” of XHTML and CSS, but to talk about it as something I’m actively experiencing is like talking about how handwriting is important. My job is software development, and while I do use a WYSIWYG editor (Visual Studio 2008), I didn’t always, and I am in markup mode as much as, if not more than, I am in designer mode.

Of course, just like with my handwriting, there is always room for improvement and deeper understanding. I think one of the reasons it is harder for me to talk about XHTML and CSS is that I’ve become “set in my ways” with both technologies, and I take a lot of things for granted. For example, my default compatibility level is set to XHTML 1.0 Transitional. I know this is one flavor of the XHTML standards, and I also have the option of using XHTML 1.1, but I had never really given much thought to their differences.

For that matter, I have been using XHTML 1.0 Transitional for quite a while, but never gave thought to when I switched to it from HTML 4. Well, here are the main differences.

First, XHTML is different from HTML in that XHTML conforms to XML standards. This means all tags must be closed, and tag names are case sensitive. It also means that tag attributes must have values enclosed in quotes. Even simple tags that didn’t require closing before must be closed now, for example <br />. The slash at the end is actually a shortcut for closing a tag without creating a separate closing tag.
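
A quick made-up contrast of the same fragment written both ways:

<!-- tolerated in old-school HTML -->
<P CLASS=intro>My photo<BR>
<IMG SRC="photo.jpg">

<!-- the XHTML version: lowercase tags, quoted attributes, everything closed -->
<p class="intro">My photo<br />
<img src="photo.jpg" alt="My photo" /></p>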

XHTML 1.0 Transitional differs from XHTML 1.0 Strict in that it is designed to bridge the gap between HTML and Strict by still allowing tags that control presentation. XHTML 1.0 Strict is free of any markup used to define layout, as in the example below.
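
For instance (my own example), the first line here validates under Transitional but not under Strict, which forces the presentation out into CSS:

<!-- allowed in XHTML 1.0 Transitional, rejected by Strict -->
<p align="center"><font color="red">Welcome!</font></p>

<!-- the Strict-friendly version -->
<p class="welcome">Welcome!</p>

.welcome { text-align: center; color: red; }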

I haven’t been using XHTML nearly as long as I’ve been using CSS. Both technologies are extremely important, and the adoption of strict compliance with both is also important. I primarily use Internet Explorer, and it has a history of allowing developers to use proprietary tags and of internally reformatting tags that are out of compliance so pages still render. IE isn’t alone in this; many other browsers (more so in past versions than now) interpret these standards in slightly different ways. All of them seem to be moving towards compliance (even IE, which will ship in “standards compliance” mode by default), which is a great thing for developers. Rather than having to detect Safari, IE, or older browsers and reformat pages or change code so that everything still works, we will be able to develop a site once and have it display properly in all compliant browsers.

In the past, addressing these slight differences has been a large time sink. It’s gotten easier, with many of the subtle nuances now well documented, and there are very valuable tools that check a page’s final rendering across many different browsers. Gone are the days of labs with a dozen computers (and later, dozens of virtual machines) running different OSes with different browsers on each (which I’ve set up before). But it would still be wonderful if developing a site were all about the design and content, and much less about its delivery.

Posted by: jimmydlg | December 3, 2008

The Podcast

Well.. here’s my first ever podcast.

The Podcast

Posted by: jimmydlg | November 30, 2008

Podcasting

While I don’t have a lot of experience consuming podcasts, I have tried on occasion to add them to my daily routine. I definitely think podcasts are a valuable tool for gathering information; I just haven’t been able to listen to them regularly.

Podcasts, of course, were a natural progression from the influx of new technologies and methods brought about by the wider adoption of the Web 2.0 paradigm. As people began posting more and more audio blogs, it became apparent that they needed to be integrated into a more refined syndication technology. Adam Curry is generally credited with starting the process that eventually led to the standardization of the RSS feed for podcasting.

Everyone these days seems to have a podcast. This seems a little counterintuitive given what I’ve been learning about Web 2.0 technologies. From what I’ve seen so far, whether you’re using GarageBand on the Mac or Audacity on Windows, podcasting isn’t as simple as some of the web-based tools I’ve seen for its relatives, such as blogging on WordPress. It’s not overly complicated, but it is definitely more involved, and it requires more than standard hardware as well (such as a good quality microphone). Still, it seems to draw in quite a number of publishers, which is great for the community as a whole.

I guess it could be done quickly and with budget-brand equipment, but the more popular podcasts I’ve listened to all have a “studio” feel to them, even when I know they’re not studio produced.

I recently had to produce an eight-minute training video for a large company. I had to script my own content, then use a fairly decent condenser microphone and Adobe Soundbooth CS4 to record myself, and finally add the audio to screen-captured video. I found this surprisingly difficult.

After getting over the initial shock of talking alone to my computer, I found that I kept stumbling over my words and having problems phrasing my ideas. It took quite a bit of editing to remove bad takes and insert corrections. A lot of the podcasts I hear, especially ones with more than one person speaking, sound very natural and unscripted, but I guess that ability comes with practice. After struggling to produce eight minutes of audio, the idea of producing 30 minutes or an hour each week, or more than once a week, seems daunting to me.

I’ve also seen several references to the types of hardware needed to podcast seriously, so in addition to the time it takes, some extra hardware besides a mic may be necessary too.

Still, I think I could personally benefit from being a podcast subscriber, and maybe some day a publisher. While I’m certainly not immersed in podcasting at the moment, since I’ve begun blogging I’ve gained a new respect for it (as both a publisher and subscriber), so I’m sure there are many benefits lying in wait for me in the world of podcasting.

Posted by: jimmydlg | November 19, 2008

Information Trapping

Information trapping is a valuable technique available in many different forms on the web. It lets us create alerts or watches for particular types of events, media, or information, and have that information sent to us once it’s found. As Tara Calishain put it so simply in an interview, “The information in which you’re interested will come to you. You will not have to go looking for it on a regular basis.”

I’ve been using the technique of information trapping for quite some time, mainly through Google Alerts, but also through various other programs (some of them stand-alone applications).

Google Alerts has been a great benefit to me. I came across it by accident while using the News section of Google to look for topics that interested me. At the bottom of a news search is a link that says “Get the latest news on <whatever you searched for>”, and once you click it, it easily takes you through the steps of creating an alert.

These alerts have kept me informed about products, news, and issues that I wanted to be reminded of later even if I knew they were far out on the horizon.

For example, I have been patiently waiting for the HDPC-20 product from DirecTV for Microsoft Windows Media Center. This product will allow me to integrate DirecTV into Media Center so that I can use Media Center as my PVR instead of the clunky, slow HD DVR from DirecTV. I first heard about the device in 2007, and when I started looking for information on it, I discovered it wouldn’t be available until sometime in 2010.

Ever since then, I’ve had a Google Alert running, and each time new information surfaces on the HDPC-20, I’m instantly informed.

Another kind of information trapping I use frequently is in my job. I use a program whose sole purpose is to monitor information from a diverse set of origins, watch for particular conditions, and alert me when those conditions change. The program is called IP Sentry, and it makes keeping track of the status and condition of all my servers and software platforms easy. Without it, I would spend a great deal of time checking the condition of various devices, platforms, and software packages, mostly just to find out that “everything’s OK.” With this package, I know instantly when something’s wrong, even if I’m not paying attention to it on a regular basis.

A while back, I extended the use of this software to watch Best Buy’s product page for the Nikon D200. When the camera was first released, it was extremely rare, and (thanks to Google Alerts) I had come across some information that Best Buy regularly received small shipments, which would appear on their website and then sell out quickly. I modified one of the monitoring agents in IP Sentry to check Best Buy’s page for when the “Out of Stock” message disappeared and send me a text message. Sure enough, at 2 AM one morning my alert woke me up; I ran to the computer and bought one of the last ones.
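
The check itself boils down to fetching the page and looking for a marker string. Here’s a minimal sketch of that idea in Visual Basic.NET (the URL and marker text are placeholders, and IP Sentry’s actual agent configuration works differently):

Imports System.Net

Module StockWatcher
    Sub Main()
        ' Placeholder product page URL, not the real one
        Dim url As String = "http://www.bestbuy.com/some-product-page"
        Using client As New WebClient()
            Dim html As String = client.DownloadString(url)
            ' If the marker text is gone, the item may be back in stock
            If Not html.Contains("Out of Stock") Then
                Console.WriteLine("In stock! Time to send the alert.")
            End If
        End Using
    End Sub
End Module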

During my research on information trapping, I came across Furl, a site that literally traps information. When you bookmark a page on Furl, much as you would with Delicious.com, it not only saves your bookmark but also saves a copy of the page in a cache for you. That way, even if the information on the page changes, or the page is removed, you still have access to the original copy. I plan on exploring this more to see how useful it will be, but for now the idea seems appealing, and I certainly haven’t minded maintaining my Delicious bookmarks.
