Web-app dev1, Web development and robots.txt

First Step

This post is about my first steps toward making my own mobile application. As a complete beginner, I want to share my experience of how I developed my web-app, K-pop celeb like me, from the very first step.



1. React Native

  • I chose React Native to develop my mobile application because I had no programming background at all and didn't want to spend time learning both native languages for iOS and Android (Swift, Kotlin, etc.). React Native was perfect for me: I could develop for both platforms, iOS and Android, with JavaScript.
  • As mentioned, I had no coding knowledge at all, so I studied from the very basics by myself: HTML, CSS, JavaScript, React, and React Native, in that order.
  • Before studying all of that, I recommend just building your application by following a YouTube tutorial or a guide, even if you have no background. Study the language you want to use afterwards; then you will have more fun with it.


2. What I prepared

  • MacBook: I had no MacBook, but I bought one because iOS applications can only be developed on a Mac. Android applications can be developed on both Windows and Mac.
  • Node.js
  • Visual Studio Code: code editor
  • Homebrew
  • Git
  • Android Studio
  • Xcode
  • nvm (Node Version Manager): nvm allows you to quickly install and switch between different versions of Node.js from the command line.
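As a sketch of how nvm is used once it is installed, the basic workflow is just a couple of commands (the version number here is only an example):

```shell
# assumes nvm is already installed; 18 is an example version
nvm install 18     # download and install Node.js 18.x
nvm use 18         # switch the current shell to that version
node --version     # confirm which version is now active
```

This is handy when different projects need different Node.js versions.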


3. Web development and deploy



4. Register web in search engine

  • This is not necessary for web-app development, but once you make your own web site, it is better if others can find it through a search engine too.
  • To make my site searchable, I needed to register it with each search engine.
    • For Google, this is "Google Search Console".
    • For Naver (the search engine Koreans mostly use), it is "Naver Webmaster Tools".

  • How to register: copy the verification meta tag provided by the search engine and paste it inside the head tag of your HTML.


  • Once you paste the tag and register the site, a virtual robot (= crawler) browses your webpages and gathers information so they can be exposed in the search engine. Google's crawler is called Googlebot; Naver's is called Yeti.
  • But what if the robot just browsed webpages without any rules? Some people have information they don't want exposed to others. That is why there is a common convention called "robots.txt". When a crawler visits a site, it looks for robots.txt first and checks which pages it is allowed to access. Only where it sees an allow rule will the crawler browse further into that site.
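As a sketch, the verification meta tags from the two registration tools look like this in your HTML head (the content values are placeholders; each tool gives you your own token):

```html
<head>
  <!-- verification tag copied from Google Search Console (token is a placeholder) -->
  <meta name="google-site-verification" content="YOUR_GOOGLE_TOKEN" />
  <!-- Naver Webmaster Tools uses the same pattern -->
  <meta name="naver-site-verification" content="YOUR_NAVER_TOKEN" />
</head>
```

After the tag is in place, you click "verify" in each tool and it checks your page for the token.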


5. robots.txt

  • robots.txt example:
User-agent: *
Disallow: /search
Allow: /
Sitemap: https://yourblogname.blogspot.com/sitemap.xml
User-agent: Mediapartners-Google
Allow: /

  1. User-agent: * applies the rules that follow to all robots that browse your site.
  2. Disallow: /search tells robots not to browse URLs that include /search. (Those are category pages, so robots do not need to browse them.) You can also add paths like /manage or /profile here for private information you may not want to show to others.
  3. Allow: / tells robots they are allowed to browse the rest of your site.
  4. Sitemap: points to an XML file that lists your webpages for crawling. It is literally a map for the crawler: the crawler reads it to learn how your site is laid out, so it can browse your pages more efficiently.
  5. User-agent: Mediapartners-Google addresses the rule below it to the AdSense robot (Mediapartners-Google).
  6. Allow: / tells the AdSense robot that it is allowed to browse your site.
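To see how a crawler interprets these rules, you can evaluate them with Python's standard-library `urllib.robotparser` (the example.com URLs are placeholders):

```python
from urllib.robotparser import RobotFileParser

# The rules from the example above, parsed directly (no network needed).
rules = """
User-agent: *
Disallow: /search
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# The crawler may fetch the front page, but not the /search pages.
print(rp.can_fetch("Googlebot", "https://example.com/"))        # True
print(rp.can_fetch("Googlebot", "https://example.com/search"))  # False
```

This is the same check a well-behaved crawler performs before browsing each URL on your site.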



  • Note that if the web site behind your web-app includes AdSense, do not also use AdMob in the web-app. Using AdSense for the web and AdMob for the same web-app at the same time is prohibited. Use only AdMob or only AdSense!
